Currently, videoscale just drops all metas that have tags other than
"video". However, videoscale won't change the colorspace or the
orientation of the video, so metas tagged as such may be copied
safely. Additionally, given that videoscale will change the frame
size, we invoke the meta's transform implementation to give it the
opportunity to scale accordingly.
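A minimal sketch of the idea (the function name is illustrative, not
the actual videoscale code):

```c
#include <gst/video/video.h>

/* Sketch: copy a meta when all of its tags are safe for videoscale,
 * and let its transform function adjust it for the new frame size. */
static gboolean
copy_and_scale_meta (GstBuffer * outbuf, GstMeta * meta, GstBuffer * inbuf,
    GstVideoInfo * in_info, GstVideoInfo * out_info)
{
  const GstMetaInfo *info = meta->info;
  const gchar *const *tags = gst_meta_api_type_get_tags (info->api);
  static const gchar *const safe_tags[] = { GST_META_TAG_VIDEO_STR,
    GST_META_TAG_VIDEO_COLORSPACE_STR, GST_META_TAG_VIDEO_ORIENTATION_STR,
    NULL
  };

  /* videoscale changes neither colorspace nor orientation, so metas
   * carrying only these tags (plus plain "video") are safe to copy */
  for (; tags && *tags; tags++)
    if (!g_strv_contains (safe_tags, *tags))
      return FALSE;             /* unknown tag: drop this meta */

  if (info->transform_func) {
    /* the frame size changes: give the meta a chance to scale itself */
    GstVideoMetaTransform trans = { in_info, out_info };

    return info->transform_func (outbuf, meta, inbuf,
        gst_video_meta_transform_scale_get_quark (), &trans);
  }

  return FALSE;
}
```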
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/548>
Especially when changing the sample rate, our timestamp tracking will be
completely off; but even otherwise we would usually lose the last few
samples if we don't drain here, as the resampler gets reset if anything
but the sample rate changes.
This is usually not a problem as the first buffer after a caps event
usually has the discont flag set, but it can cause problems if
- the caps event is followed by a segment event, which then causes
  draining according to the new sample rate
- the caps were changed because of renegotiation due to a reconfigure
  event and there is no discontinuity from upstream
In both cases we would output buffers with completely wrong timestamps.
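A sketch of the resulting flow; `GstAudioResample` and `drain_pending()`
stand in for the element's private struct and its existing drain logic:

```c
static gboolean
gst_audio_resample_sink_event (GstBaseTransform * trans, GstEvent * event)
{
  GstAudioResample *resample = GST_AUDIO_RESAMPLE (trans);

  if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) {
    /* push out pending samples before the converter is reset, so their
     * timestamps are still computed with the old sample rate */
    drain_pending (resample);
  }

  return GST_BASE_TRANSFORM_CLASS (parent_class)->sink_event (trans, event);
}
```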
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/670>
Stop comparing the timestamps of all buffers that come before the
segment against segment.stop, and compare their actual end times instead.
Comparing against segment.stop for all the buffers that were before
segment.stop was incorrect, leading to consuming the wrong buffers
and not respecting segment.stop; this is now properly tested.
Expectations for `reverse.10_to_1fps.validatetest` have been fixed to
take that into account, and comparing the checksums of the sinkpad and
srcpad expectations makes it pretty clear how wrong that was.
(We can see in the expectations that videotestsrc outputs an extra
buffer with pts == segment.stop, and this one is now properly dropped
by videorate, as bec7f4ad5e aimed to do.)
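In code terms, the corrected comparison is roughly (a sketch, not the
actual diff):

```c
/* compare segment.stop with the buffer's *end* time; comparing plain
 * PTS values against segment.stop consumed the wrong buffers */
GstClockTime end = GST_BUFFER_PTS (buf);

if (GST_BUFFER_DURATION_IS_VALID (buf))
  end += GST_BUFFER_DURATION (buf);

if (GST_CLOCK_TIME_IS_VALID (segment->stop) && end > segment->stop) {
  /* the buffer ends past segment.stop: clip or drop it */
}
```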
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/668>
In reverse playback we were not taking into account the samples of the
current buffer when checking whether we had reached EOS, which was
leading to a buffer with PTS = CLOCK_TIME_NONE containing too many
frames, followed by a useless buffer with pts=0 duration=0, and a
g_critical in gst_object_sync_values().
Also add a validate-based test case.
Without that patch, this is how the expectation fails:
``` diff
--- log-asink-sink-expected 2020-05-22 23:22:42.654384579 -0400
+++ log-asink-sink-actual 2020-05-22 23:29:35.671586380 -0400
@@ -27,5 +27,6 @@
buffer: pts=0:00:00.058820861, due=0:00:00.023219955, flags=discont
buffer: pts=0:00:00.035600907, due=0:00:00.023219954, flags=discont
buffer: pts=0:00:00.012380952, due=0:00:00.023219955, flags=discont
-buffer: pts=0:00:00.000000000, due=0:00:00.012380952, flags=discont
+buffer: due=0:00:00.012380953, flags=discont
+buffer: pts=0:00:00.000000000, flags=discont
event eos: (no structure)
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/667>
Also fix the reverse playback buffer duration computation: in reverse
playback, a buffer's duration is prev_buffer.pts - buffer.pts, not
buffer.pts - next_buffer.pts (buffers are displayed from
buffer.pts + buffer.duration, for a duration of buffer.duration).
This is now tested with the `validate.test.clock_sync.videorate.*`
tests in the default integration testsuite, where we check the exact
data flow and the clock synchronization behaviour with a TestClock.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/646>
In reverse playback, buffers have to be displayed at buffer.stop running
time, meaning:
buffer.pts + buffer.duration = prev_buffer.pts
=>
buffer.duration = prev_buffer.pts - buffer.pts
We were setting buffer.duration = next_buffer.pts - buffer.pts, which
is not correct.
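Expressed as code (sketch; `prevbuf` is the previously received buffer,
which in reverse playback has the later PTS):

```c
/* the previously received buffer bounds the current one, so its PTS
 * is the current buffer's end time */
GST_BUFFER_DURATION (buf) =
    GST_BUFFER_PTS (prevbuf) - GST_BUFFER_PTS (buf);
```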
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/646>
Include the Program Stream Map packet type 0xBC in the
set of packets we treat as PES. This fixes typefinding
on MPEG-PS streams with a PSM, where the PSM would previously
be considered a loss of sync and cause the typefinder
to require more data.
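A sketch of the widened check (the helper name is illustrative):

```c
#define PSM_ID 0xBC             /* Program Stream Map stream id */

/* accept the PSM id in addition to the regular PES ids (0xBD..0xFF),
 * so a PSM packet is no longer treated as a loss of sync */
static gboolean
is_pes_like (guint8 stream_id)
{
  return stream_id == PSM_ID || stream_id >= 0xBD;
}
```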
The string "\"OTIO_SCHEMA\":" is 14 characters and not 15. Checking for
15 characters would also check for the final '\0', which does not exist
in any otio file as the string is the key of a JSON map.
memcmp() returns 0 (aka FALSE) on match and a difference otherwise.
Previously the typefinder was matching on anything but otio files that
happened to have some curly braces in the beginning of the file.
Fixes a false positive with a MOV file.
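The corrected check therefore looks something like this (the caps
variable is illustrative):

```c
/* "\"OTIO_SCHEMA\":" is 14 bytes; there is no trailing NUL to match,
 * and memcmp() returning 0 is the *match* case */
if (memcmp (data, "\"OTIO_SCHEMA\":", 14) == 0)
  gst_type_find_suggest (tf, GST_TYPE_FIND_MAXIMUM, otio_caps);
```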
A previously configured bufferpool can be expired/inactivated by
updated caps. Therefore, a new reconfigure event should be signalled in
order to do the allocation query dance between upstream and downstream
again.
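A sketch of the idea (the condition and pad variable are illustrative;
the reconfigure event itself is standard GStreamer API):

```c
/* if the updated caps invalidated the negotiated pool, ask upstream
 * to redo the allocation query negotiation */
if (pool_is_unusable)
  gst_pad_push_event (sinkpad, gst_event_new_reconfigure ());
```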
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/730
Note that we didn't do it for encodebin, as it has a class struct. We
_could_ technically use `G_DECLARE_DERIVABLE_TYPE()` for that one, but
that would mean also using a private struct, which is even more work for
no gain.
Unfortunately the OS takes care of bad connections for us, so we can't
get the stats in a platform-independent way. Count total bytes received
as well, platform-independently.
The main_input stream-id would not get reset when going to the READY
state. This would cause warnings when trying to reuse the same
decodebin3, since you would get a new STREAM_START event with a new
stream-id, which would collide with the now-stale stream-id.
The default values were not being set on element initialization. This
was a problem for buffer_duration and buffer_size, since they would be
zero-initialized rather than being set to -1. This would cause the
underlying queue2 element to have no limits and, depending on the
streamed file, could cause queue2 to allocate massive amounts of memory.
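A sketch of the fix (the element type and DEFAULT_* names are
illustrative):

```c
#define DEFAULT_BUFFER_DURATION -1      /* -1: use queue2's own default */
#define DEFAULT_BUFFER_SIZE     -1

static void
gst_foo_init (GstFoo * self)
{
  /* without this, the zero-initialized values would configure the
   * underlying queue2 with no limits at all */
  self->buffer_duration = DEFAULT_BUFFER_DURATION;
  self->buffer_size = DEFAULT_BUFFER_SIZE;
}
```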
In decodebin3 and uridecodebin3 a `force-sw-decoders` boolean property
is added. In uridecodebin3 it is only a proxy property which forwards
the value to decodebin3.
When decodebin3 has `force-sw-decoders` enabled, it filters out of its
decoder and decodable factories those elements within the 'Hardware'
class when reconfiguring the output stream.
playbin3 adds the GST_PLAY_FLAG_FORCE_SW_DECODERS flag and sets the
`force-sw-decoders` property accordingly on its internal uridecodebin;
it also filters out 'Hardware'-class decoder elements during caps
negotiation.
Added a `force-sw-decoders` boolean property to decodebin2 and
uridecodebin. By default the property is %FALSE and it bypasses the new
code. Otherwise the factory list is filtered, removing decoders within
the 'Hardware' class.
uridecodebin sets the `force-sw-decoders` property on its internal
decodebin, and also filters out the Hardware class in the
autoplug-factories default signal handler.
playbin2 adds the GST_PLAY_FLAG_FORCE_SW_DECODERS flag to its flags
property and, depending on it, sets the `force-sw-decoders` property on
its internal uridecodebin; it also filters out Hardware-class decoders
in the autoplug-factories signal handler.
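A sketch of the filtering all of these elements share (the helper name
is illustrative; the factory-type API is standard):

```c
/* drop 'Hardware'-classed factories from a GList of GstElementFactory
 * when force-sw-decoders is enabled */
static GList *
filter_out_hw_decoders (GList * factories)
{
  GList *l = factories;

  while (l) {
    GList *next = l->next;
    GstElementFactory *factory = l->data;

    if (gst_element_factory_list_is_type (factory,
            GST_ELEMENT_FACTORY_TYPE_HARDWARE)) {
      gst_object_unref (factory);
      factories = g_list_delete_link (factories, l);
    }
    l = next;
  }

  return factories;
}
```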
When the playsink's sink is activated, its state is set to READY but it
remains unlinked. So, in order for decodebin3 to potentially reuse the
context later on, the whole playbin3 needs to store it internally.
There are (mp4) streams in the wild that contain basically no tracks
but do have redirect info[0], in which case qtdemux won't be able
to expose any pad (there are no tracks) and can't post anything but
an error on the bus, as:
- it can't send EOS downstream, since it has no pads,
- posting an EOS message would be useless, as the PAUSED state can't be
  reached and there is no sink in the pipeline, meaning GstBin will
  simply ignore it.
Currently the application could try to handle that, but it is pretty
complex: it will get the REDIRECT message on the bus, at which point it
could set the URL, but playbin will ignore it as it will only apply to
the next EOS; the application thus needs to set the pipeline to NULL
(READY won't do, as it is already in READY at that point). And it needs
to figure out that the following ERROR message on the bus has to be
ignored, which is not really simple.
The approach here is to allow elements to add details to the ERROR
message with a `redirect-location` field, which elements like playbin
handle and use right away.
We could also use the element 'redirect' message in playbin, but the
issue with that approach is that the element would still post the ERROR
message on the bus, leading to wrong behaviour. That can't be avoided,
since in case the app/parent pipeline does not handle the redirect
instruction, the ERROR message is necessary (and there is no way for the
element emitting the redirect to detect that the message has been
"handled").
[0]: http://movietrailers.apple.com/movies/paramount/terminator-dark-fate/terminator-dark-fate-trailer-2_480p.mov
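With that field, an element can post something along these lines (a
sketch; the message strings and `redirect_uri` are illustrative):

```c
/* attach the redirect-location detail to the ERROR message so that
 * playbin can use it right away */
GST_ELEMENT_ERROR_WITH_DETAILS (qtdemux, STREAM, DEMUX,
    (_("This file contains no playable streams.")),
    ("redirect available, but no streams to expose"),
    ("redirect-location", G_TYPE_STRING, redirect_uri, NULL));
```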
Following the [design document], encodebin needs to handle sources that
output multiple streams. For that purpose, and to make it simpler, we
ensure that a single segment is output to the encoders by using an
`identity single-segment=true` at the beginning of the stream chains.
Added API to enable or disable the use of that new feature.
Added support in the encoding profile parser for that new property,
keeping backward compatibility.
[design document]: https://gstreamer.freedesktop.org/documentation/additional/design/encoding.html?gi-language=c#rendering-timelines
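The element inserted at the head of each stream chain is essentially
(sketch):

```c
/* collapse all incoming segments into one before the encoder, so
 * sources that output multiple streams/segments are handled */
GstElement *identity = gst_element_factory_make ("identity", NULL);

g_object_set (identity, "single-segment", TRUE, NULL);
```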
Setting the property again after it had already been set ran
g_value_unset() but did not initialize the GValue again, so
g_value_copy() failed afterwards. Removed the unset, as cleanup is done
implicitly by g_value_copy().
Changing the mix-matrix property did not trigger reconfiguration of the
caps; this has been added.
If the matrix is set to an empty matrix, then instead of copying it the
matrix is simply disabled by setting mix_matrix_is_set (formerly
mix_matrix_was_set) to FALSE, so the mix-matrix is ignored from then on.
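A sketch of the resulting setter (field names follow the commit
message; the element type and the rest are illustrative):

```c
static void
gst_foo_set_mix_matrix (GstFoo * self, const GValue * value)
{
  if (gst_value_array_get_size (value) == 0) {
    /* empty matrix: disable mixing instead of copying it */
    self->mix_matrix_is_set = FALSE;
  } else {
    /* g_value_copy() clears the previous contents implicitly; calling
     * g_value_unset() first would leave the GValue uninitialized and
     * make the copy fail */
    g_value_copy (value, &self->mix_matrix);
    self->mix_matrix_is_set = TRUE;
  }

  /* a changed matrix may change the channel layout: renegotiate caps */
  gst_base_transform_reconfigure_src (GST_BASE_TRANSFORM (self));
}
```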
Previously this would only have set discont=TRUE and then simply
returned immediately for all future buffers.
Instead we also need to
a) drain the previous input up to its buffer time
b) update next_ts and base_ts accordingly for the gap
c) actually store the new buffer after the gap so it can be used in
   the future, and so the old buffer before the gap is gone
Also update the unit test accordingly so that it actually tests this
behaviour. Previously it only tested that after the gap we got no output
at all.
Due to the use of the pad {set,get}_element_private methods to store
the GstSyncStream in the src and sink pads, and the racy nature of pad
destruction, there are numerous ways we can be bitten by race conditions
in the stream synchronizer. Fix that by tying the pads together with
references.
The streamcombiner element forwards an upstream event to only one
sinkpad. When streamcombiner is used with encodebin, the sinkpad
corresponding to the pass-through path is configured before that of the
encoder, and therefore streamcombiner forwards upstream events only to
the one configured first (i.e., the pass-through path).
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
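For example (the signal name and surrounding variables are
illustrative):

```c
/* passing NULL as the marshaller lets GLib pick both
 * g_cclosure_marshal_VOID__OBJECT and its va_list variant internally */
signals[SIG_MY_PAD_ADDED] =
    g_signal_new ("my-pad-added", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL, NULL /* marshaller */,
    G_TYPE_NONE, 1, GST_TYPE_PAD);
```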
By suggesting a possible detection too early, it's possible that
the wrong format is detected. Hold off making suggestions until one
of the following conditions is met:
* Probability > GST_TYPE_FIND_LIKELY
* At least MPEG_MIN_PROBE_LENGTH bytes have been examined
* EOS, in which case the best guess wins
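The gating condition then becomes roughly (sketch; the variable names
are illustrative):

```c
/* only suggest once we are confident, have probed enough data, or hit
 * EOS (in which case the best guess wins) */
if (best_prob > GST_TYPE_FIND_LIKELY
    || probed_bytes >= MPEG_MIN_PROBE_LENGTH || at_eos)
  gst_type_find_suggest (tf, best_prob, caps);
```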
Fixes #628
The blend functions for alpha formats need to do more work than just a
memcpy, so use a memcpy instead whenever we know that a blend is not
actually needed.
1080p AYUV ! compositor background=transparent ! fakesink - 56% faster
Specifically, when we don't draw the background and the first pad we
draw completely covers the output frame, we can just copy it as-is.
The rest of the pads (if any) will get composited on top normally.
If the background is transparent and obscured by a pad that may or may
not have alpha, we can still skip drawing it entirely.
AYUV 1080p ! compositor background=transparent ! fakesink - 75% faster
We don't need to waste time drawing the background when one of the
pads completely covers the output and there's no alpha on the pad or
in the video format. Speedups:
I420 1080p ! compositor ! fakesink - 72% faster
I420 1080p ! compositor background=black ! fakesink - 45% faster
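A sketch of the obscuring test these optimizations rely on (field and
helper names are illustrative; this covers the opaque case, while the
transparent-background case above skips drawing even when the covering
pad has alpha):

```c
/* a pad fully obscures the frame if it is opaque and covers the whole
 * output; then the background need not be drawn at all */
static gboolean
pad_obscures_frame (const GstVideoInfo * info, gint xpos, gint ypos,
    gint width, gint height, gdouble alpha, gint out_w, gint out_h)
{
  if (GST_VIDEO_INFO_HAS_ALPHA (info) || alpha < 1.0)
    return FALSE;               /* the background could show through */

  return xpos <= 0 && ypos <= 0
      && xpos + width >= out_w && ypos + height >= out_h;
}
```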
Protocols that are in the stream_uris list should always be treated
as streams, no matter what they respond to the scheduling query. The
flag in the scheduling query is just another way to declare something
that needs buffering without the whitelist; the absence of the flag
shouldn't make us ignore our known protocol list.
Also, always set is_stream to a boolean and not a mask value.
If the last WebVTT cue does not have an empty line after it, or if it
does not end with a newline at all, it does not get pushed out and
won't be displayed.
gst_sub_parse_sink_event() already handles this issue for other subtitle
formats; enable handling it for GST_SUB_PARSE_FORMAT_VTT too.
While at it, also add a test for this case.