The goal here is to minimize the work needed to bring all images
to a common format. A better criterion than the number of pads
with a given format is the number of pixels with a given format.
https://bugzilla.gnome.org/show_bug.cgi?id=786078
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), so that the crossfade result of
two opaque frames is also fully opaque at any point of the crossfading
process, avoiding bleed-through during layer blending.
Some rationale can be found at https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
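A minimal sketch of the alpha math (illustrative only; the real compositing
code also works per-pixel and in premultiplied form):

    /* Illustrative only: combine two alpha values so that two fully opaque
     * inputs stay fully opaque for any crossfade ratio in [0, 1]. */
    static inline gdouble
    crossfade_alpha (gdouble src_alpha, gdouble dst_alpha, gdouble ratio)
    {
      gdouble out = src_alpha * ratio + dst_alpha * (1.0 - ratio);

      return MIN (out, 1.0);
    }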
When the pad has received EOS, its buffer may still be mixed
any number of times when the pad's framerate is lower
than the output framerate.
This was introduced by my patch in
https://bugzilla.gnome.org/show_bug.cgi?id=782962; this patch
also correctly addresses the initial issue.
When the input is TRICKMODE_KEY_UNITS, we expect to only receive keyframes
which we want to decode/push immediately. Therefore don't queue them.
If upstream didn't send just keyframes (which is the ideal situation), two
different things can happen:
1) Either the subclass checks the segment flags and properly configures
the decoder implementation to only decode/output keyframes,
2) Or the subclass really decodes and outputs everything, in which case
the reverse frames will end up arriving "late" downstream (and will
be dropped). If upstream did properly send GOPs in reverse order, we
still end up just showing keyframes (but with the overhead of decoding
everything).
https://bugzilla.gnome.org/show_bug.cgi?id=777094
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily used immediately,
which sometimes resulted in using an old buffer with the new caps. Of course,
there used to be a separate buffer_vinfo for mapping the buffer with its
own caps, but in compositor the GstVideoConverter was still using the wrong
info, which resulted in invalid reads and corrupt output.
The approach here is safer: we delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since the
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
Always put multiview-caps onto the output caps, assuming
mono if we've got no other information. It's still easy for
downstream elements to override using a capssetter or event
probe if desired.
https://bugzilla.gnome.org/show_bug.cgi?id=776172
When entering this code path, we know that:
* We received EOS on this pad.
* We consumed all its buffers.
In any case, we want to replace vaggpad->buffer with NULL,
otherwise we will end up mixing the same buffer twice.
https://bugzilla.gnome.org/show_bug.cgi?id=781037
The GSource for dealing with timeouts in
gst_video_convert_sample_async() might be attached to a non-default
context, so we should not be using g_source_remove() on the returned ID.
The correct thing to do is to keep a reference to the actual GSource and
then call g_source_destroy() on it.
https://bugzilla.gnome.org/show_bug.cgi?id=780297
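A minimal sketch of the fixed pattern (on_timeout, attach_timeout and the
context are placeholders, not names from this change):

    #include <glib.h>

    static gboolean
    on_timeout (gpointer user_data)      /* placeholder callback */
    {
      return G_SOURCE_REMOVE;
    }

    static GSource *
    attach_timeout (GMainContext * context, gpointer user_data)
    {
      /* Keep the GSource itself instead of the ID from g_timeout_add(), so
       * it can be removed even when attached to a non-default context. */
      GSource *source = g_timeout_source_new (25 * 1000);

      g_source_set_callback (source, on_timeout, user_data, NULL);
      g_source_attach (source, context);
      return source;
    }

    /* later: g_source_destroy (source); g_source_unref (source); */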
Track how long it takes to generate the first buffer after a flush
as a simple measure of how efficient the decoder is at skipping /
rushing to get to the first decode.
When initializing a timecode from a GDateTime, if the remaining time
until the next second was less than half a frame (according to the given
frame rate), it would lead to the creation of an invalid timecode, e.g.
00:00:00:25 (at 25 fps) instead of 00:00:01:00. Fixed.
https://bugzilla.gnome.org/show_bug.cgi?id=779866
Use G_GUINT64_FORMAT for guint64 values.
Introduced by fcb63e77a9
Found by Alexander Larsson
gstvideodecoder.c: In function 'gst_video_decoder_have_frame':
gstvideodecoder.c:3312:51: error: format '%u' expects argument of type 'unsigned int', but argument 8 has type 'guint64 {aka long long unsigned int}' [-Werror=format=]
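The fix boils down to using the matching format modifier, e.g. (variable
names are illustrative):

    guint64 bytes = some_guint64_value;   /* placeholder */

    GST_DEBUG_OBJECT (decoder, "have %" G_GUINT64_FORMAT " bytes", bytes);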
Don't guess a timestamp of the start of the segment when running
in reverse mode, as more likely it means we're discontinuous somewhere
in the middle of the segment, and we'll fix up timestamps once
the frames are decoded and reversed.
When a PTS is not set, we still want to store the rest of the
buffer information, or else we lose important things like the
duration or buffer flags when parsing.
This adds a property to select the maximum number of threads to use for
conversion and scaling. During processing, each plane is split into
an equal number of consecutive lines that are then processed by each
thread.
During tests, this gave up to 1.8x speedup with 2 threads and up to 3.2x
speedup with 4 threads when converting e.g. 1080p to 4k in v210.
https://bugzilla.gnome.org/show_bug.cgi?id=778974
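A rough sketch of how a GstVideoConverter user might request threads; the
exact option name is assumed from this change, and in_info/out_info and the
frames are placeholders:

    GstStructure *config;
    GstVideoConverter *convert;

    config = gst_structure_new ("GstVideoConverter",
        GST_VIDEO_CONVERTER_OPT_THREADS, G_TYPE_UINT, 4, NULL);
    convert = gst_video_converter_new (&in_info, &out_info, config);

    gst_video_converter_frame (convert, &in_frame, &out_frame);
    gst_video_converter_free (convert);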
In gst_video_time_code_is_valid, also check for invalid
ranges when using drop-frame TC. Refactor some code which
broke after the check was added.
https://bugzilla.gnome.org/show_bug.cgi?id=779010
It was taking the initial input y-offset from the output value, which
only works for y=0 (in which case both are the same). If y > 0, we would
always stay behind the requested input offset and never ever read
anything from the input.
Sometimes there is a human-oriented timecode that represents an
interval between two other timecodes. It corresponds to the human
perception of "add X hours" or "add X seconds" to a specific timecode,
taking drop-frame oddities into account. This interval-representing
timecode is now a GstVideoTimeCodeInterval. Also added a function to add it to
a GstVideoTimeCode.
https://bugzilla.gnome.org/show_bug.cgi?id=776447
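Roughly, usage looks like this (assuming the API added here; tc is an
existing GstVideoTimeCode):

    /* "add 1 hour" to tc, letting the library handle drop-frame oddities */
    GstVideoTimeCodeInterval *hour = gst_video_time_code_interval_new (1, 0, 0, 0);
    GstVideoTimeCode *later = gst_video_time_code_add_interval (tc, hour);

    gst_video_time_code_interval_free (hour);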
The flags and field order weren't properly initialized in
gst_video_info_init().
Furthermore, gst_video_info_from_caps() might previously have set
uninitialized values; now it only sets them if they are valid.
For drop-frame timecodes, the nsec_since_daily_jam doesn't necessarily
directly correspond to this many hours/minutes/seconds/frames. We have
to get the frame count as per frames_since_daily_jam and then convert.
https://bugzilla.gnome.org/show_bug.cgi?id=774585
Refuse to answer BYTES queries ourselves. The only
time they make sense is on raw elementary streams,
in which case upstream would already have answered.
https://bugzilla.gnome.org/show_bug.cgi?id=757631
It adds a third argument to pass GstBufferPoolAcquireParams
to gst_buffer_pool_acquire_buffer.
If a user subclasses GstBufferPoolAcquireParams, this allows passing
updated params to the underlying buffer pool on each
gst_video_decoder_allocate_output_frame_with_params() call.
https://bugzilla.gnome.org/show_bug.cgi?id=773165
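A minimal sketch of a subclass using it (decoder and frame are placeholders):

    GstBufferPoolAcquireParams params = { 0, };
    GstFlowReturn ret;

    /* e.g. don't block if the pool has no free buffer right now */
    params.flags = GST_BUFFER_POOL_ACQUIRE_FLAG_DONTWAIT;
    ret = gst_video_decoder_allocate_output_frame_with_params (decoder, frame,
        &params);
    if (ret != GST_FLOW_OK)
      return ret;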
Usually this information is static for the whole stream, and various
container formats store this information inside the headers for the
whole stream.
Having it inside the caps for these cases simplifies code and makes it
possible to express these requirements more explicitly with the caps.
https://bugzilla.gnome.org/show_bug.cgi?id=771376
Also, the format must be fixed on the default raw caps. If not,
gst_video_info_from_caps() will fail and
gst_video_decoder_negotiate_default_caps() will return FALSE.
The test simulates the use case where a gap event is received before
the first buffer causing the decoder to fall back to the default caps.
https://bugzilla.gnome.org/show_bug.cgi?id=773103
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
They are false positive overflows, because coverity doesn't realize that
hours <= 24, minutes < 60 and seconds < 60 in all functions. Also cast the
number 60 (seconds per minute, minutes per hour) to guint64 for the
calculations, in order to avoid overflowing once we allow more than 24-hour
timecodes.
CIDs #1371459, #1371458
Most of them are overflow related and false positives, but coverity can't know
that these can't overflow without us giving it more information. Add some
assertions for this.
One was an actual issue with flags comparison.
CIDs #1369051, #1369050, #1369049, #1369048, #1369045
We need to take into account the input segment flags to know whether
we should drain the decoder after a new keyframe in trick mode.
Otherwise we would have to wait for the next frame to be outputted (and
the segment to be activated) which ... well ... kind of beats the whole
point of this draining :)
And especially don't use the stream lock for that, as otherwise non-serialized
queries (CONVERT) will cause the stream lock to be taken and easily causes the
application to deadlock.
https://bugzilla.gnome.org/show_bug.cgi?id=768361
Fix problem with the line cache where it would forget the first line in
the cache in some cases.
Keep as much backlog as we have taps. This generally works better and we
could do even better by calculating the overlap in all taps.
Allocate enough lines for the line cache.
Use only half the number of taps for the interlaced lines because we
only have half the number of lines.
The pixel shift should be relative to the new output pixel size so scale
it.
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=767921
For reverse playback it is important to handle the frame sync points
correctly; these are set when the input buffer doesn't have the DELTA_UNIT flag.
This is handled correctly when the decoder is packetized, but when it is not,
the frame's sync point is not copied, and reverse playback never decodes frame
batches.
The current patch adds the buffer's flags to the Timestamp list, where the
timestamp and duration of the input buffers are held.
There were two consecutive log messages in gst_video_decoder_decode_frame().
Given the information they provide, it is more efficient to squash them into a
single one.
The playback rate is held in the input_segment member variable, not in the
output_segment, and the parse_gather list was never filled because of that.
This patch changes the comparison to use input_segment.
The output segment is only set up after data is output, which might be far in
the future for reverse playback. Also we are here interested in the state at
the current *input* frame (which is the keyframe), not any possible output.
For reverse playback the same behaviour was already implemented in
flush_parse().
For reverse playback, chain_forward() is only used to gather frames and not
for decoding, and it is actually called by the draining logic, causing an
infinite recursion.
While it's a bit tricky to discard frames *before* decoding (because
we might not be sure which data is needed or not by the decoder), we
can discard them after decoding if they are too late anyway.
Any following basetransform based element or similar would drop the frame too.
When asked to decode only keyframes, if we got a keyframe, drain out
the decoder straight away.
This avoids having to wait for the next frame and reduces delay even
more.
https://bugzilla.gnome.org/show_bug.cgi?id=767232
This ensures the decoder is properly drained out when receiving a
DISCONT buffer. The optimal way of doing this would have been to
receive a GAP event beforehand, but it is not always possible.
Fixes big delays with some decoders (e.g. gst-libav) that will not
drain out data when only decoding keyframes.
https://bugzilla.gnome.org/show_bug.cgi?id=767232
The base class was setting the DISCONT flag before checking whether the buffer
would be in segment or not.
Fix issues with DISCONT flags not being properly propagated downstream when
decoder buffers were out of segment.
https://bugzilla.gnome.org/show_bug.cgi?id=766800
If the input buffer is after the end of the output buffer, then waiting
for more data won't help. We will never get an input buffer for this point.
This fixes compositing of streams from rtspsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=766422
gst_video_sink_center_rect() can be called without a GstVideoSink
having been instantiated, so we can't rely on the video sink
class_init function to init the category.
Fix a warning when running:
GST_CHECKS=test_video_center_rect GST_DEBUG=6 G_DEBUG=fatal_warnings make libs/video.check-norepeat
https://bugzilla.gnome.org/show_bug.cgi?id=766510
We weren't using the result of find_best_format at all.
Also, move the find_best_format usage to the default update_caps() to make
sure that it is also overridable.
https://bugzilla.gnome.org/show_bug.cgi?id=764363
Since the allocation query caps contains memory size and the pad's caps
contains the display size, a video encoder or decoder might need to allocate
a different frame size than the size negotiated in the caps.
This patch splits these two notions for videodecoder and videoencoder.
If the user needs different allocation caps, they should set allocation_caps
in the GstVideoCodecState before calling the negotiate() vmethod. Otherwise the
allocation_caps will be the same as the caps set on the src pad.
https://bugzilla.gnome.org/show_bug.cgi?id=764421
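A rough sketch of a decoder that needs a larger allocation than the display
size (the NV12 format, sizes and ref_state are placeholders; allocation_caps
as described above):

    GstVideoCodecState *state;
    GstVideoInfo coded_info;

    state = gst_video_decoder_set_output_state (decoder, GST_VIDEO_FORMAT_NV12,
        display_width, display_height, ref_state);

    /* memory is allocated at the coded size, not the display size */
    gst_video_info_set_format (&coded_info, GST_VIDEO_FORMAT_NV12,
        coded_width, coded_height);
    state->allocation_caps = gst_video_info_to_caps (&coded_info);

    gst_video_decoder_negotiate (decoder);
    gst_video_codec_state_unref (state);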
The parameter type was wrongly documenting that a GstVideoInfo structure
pointer was needed, while it needs a GstVideoFormatInfo structure
pointer.
https://bugzilla.gnome.org/show_bug.cgi?id=764414
P010 is a YUV420 format with an interleaved U-V plane and 2-bytes per
component with the color value stored in the 10 most significant
bits.
https://bugzilla.gnome.org/show_bug.cgi?id=761607
---
Changes since v2:
- Set bits=16 in DPTH10_10_10_HI
Changes since v1:
- Fixed x-offset calculation in uv.
- Added 6-bit shifts to FormatInfo.
When caps are already negotiated it should be possible to
select formats other than the one that was negotiated. If downstream
allows alpha video caps and it has already negotiated to a non-alpha
format, caps queries should still return the alpha caps as a possible
format as caps renegotiation can happen.
Includes tests (for compositor) to check that caps queries done after
caps have been negotiated return complete results.
https://bugzilla.gnome.org/show_bug.cgi?id=757610
libgstreamer currently exports some debug category
symbols GST_CAT_*, but those are not declared in any
public headers.
Some plugins and libgstvideo just use GST_DEBUG_CATEGORY_EXTERN()
to declare and use those, but that's just not right at
all, and it won't work on Windows with MSVC. Instead look
up the categories via the API.
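For example, instead of GST_DEBUG_CATEGORY_EXTERN(), a plugin can do
something like:

    GST_DEBUG_CATEGORY_STATIC (CAT_PERFORMANCE);

    static void
    init_categories (void)
    {
      /* look the category up by name through the debug API */
      GST_DEBUG_CATEGORY_GET (CAT_PERFORMANCE, "GST_PERFORMANCE");
    }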
gst_pad_get_allowed_caps() will return NULL if the srcpad has no peer.
In that case, use gst_pad_peer_query_caps() with template caps as filter
to have negotiated output caps properly before forwarding GAP event.
https://bugzilla.gnome.org/show_bug.cgi?id=761218
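Sketch of the fallback (srcpad being the element's source pad):

    GstCaps *caps = gst_pad_get_allowed_caps (srcpad);

    if (caps == NULL) {
      /* no peer yet: query downstream with the template caps as filter */
      GstCaps *templ = gst_pad_get_pad_template_caps (srcpad);

      caps = gst_pad_peer_query_caps (srcpad, templ);
      gst_caps_unref (templ);
    }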
Allows the subclass to completely override the chosen src caps.
This is needed as videoaggregator generally has no idea exactly
what operation is being performed.
- Adds a fixate_caps vfunc for fixation
- Merges gst_video_aggregator_update_converters() into
gst_videoaggregator_update_src_caps() as we need some of its info
for proper caps handling.
- Pass the downstream caps to the update_caps vfunc
https://bugzilla.gnome.org/show_bug.cgi?id=756207
Add the missing ':' to the tile_ws and tile_hs field documentation to avoid
bad rendering of these two fields, mark reserved bytes as private to hide the
field and avoid a gtkdoc warning, and add parameter descriptions to the
documented macro to avoid gtkdoc warnings.
https://bugzilla.gnome.org/show_bug.cgi?id=761132
In gst_video_info_to_caps(), make sure we end up with an RGB matrix for
RGB formats and warn when the GstVideoInfo colorimetry is wrong.
In gst_video_info_from_caps(), fix the GstVideoInfo with an RGB matrix
for RGB formats and warn about inconsistent caps.
See https://bugzilla.gnome.org/show_bug.cgi?id=759624
For RGB formats, the matrix in the colorimetry (conversion from YUV to
RGB) is irrelevant and we should ignore it and assume the identity
transform for everything we do.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=759624
In the case where the stream doesn't have a framerate set and the frames
don't have a duration set, we still want to use the clipping path to
make sure we don't push buffers outside of the segment.
The problem was that the previous iteration set a duration of 2s, which
meant that any buffer less than 2s before the segment start would
end up getting pushed.
Instead, use a saner 40ms (the duration of a single frame at 25fps) to figure
out whether the frame could be within the segment or not.
v210, UYVP and IYU1 are complex formats for which pixel stride does not really
have a meaning. If we copy width*pstride bytes per line, it's not going to do
the right thing. As a fallback, copy stride bytes per line. This might copy
uninitialized bytes at the end of each line, but at least copies the frame.
https://bugzilla.gnome.org/show_bug.cgi?id=755392
We have to queue buffers based on their running time, not based on
the segment position.
Also return running time from GstAggregator::get_next_time() instead of
a segment position, as required by the API.
Also only update the segment position after we pushed a buffer, otherwise
we're going to push down a segment event with the next position already.
https://bugzilla.gnome.org/show_bug.cgi?id=753196
Only accept alpha if downstream has alpha as well. It could
theoretically accept alpha unconditionally if blending were
properly implemented to handle it, but at the moment this
is a missing feature.
Improves the caps query by also comparing with the template
caps to filter by what the subclass supports.
https://bugzilla.gnome.org/show_bug.cgi?id=754465
Before, we just merged everything in pretty much random, ad-hoc ways
instead of keeping state properly. In 0.10 that was
how it worked, but in 1.x the tag events sent should always
reflect the latest state and replace any previous tags.
So save the upstream (stream) tags, and save the tags set
by the decoder subclass with merge mode, and then update
the merged tags whenever either of those two changes.
This slightly changes the behaviour of gst_video_decoder_merge_tags()
in case it is called multiple times, since now any call replaces
the previously-set tags. However, it leads to much more predictable
outcomes, and also we are not aware of any subclass which sets this
multiple times and expects all the tags set to be merged.
If more complex tag merging scenarios are required, we'll have
to add a new vfunc for that or the subclass has to intercept
the upstream tags itself and send merged tags itself.
https://bugzilla.gnome.org/show_bug.cgi?id=679768
Apparently I forgot how GObject works: there is no need to expose
it directly, as one can call it from the parent_class pointer.
This reverts commit ea9b6a7e3c.
Add gst_video_decoder_set_use_default_pad_acceptcaps() to allow
subclasses to make videodecoder use the default pad acceptcaps
handling instead of resorting to the caps query, which is usually
less efficient and unnecessary.
API: gst_video_decoder_set_use_default_pad_acceptcaps
Subclasses can use it to select what queries they want to handle
and forward the rest to the default handling function.
API: gst_video_decoder_sink_query_default
https://bugzilla.gnome.org/show_bug.cgi?id=753623
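A rough sketch of a subclass using both additions (MyDec and the
custom-query helper are placeholders):

    static void
    my_dec_init (MyDec * self)
    {
      gst_video_decoder_set_use_default_pad_acceptcaps (GST_VIDEO_DECODER (self),
          TRUE);
    }

    static gboolean
    my_dec_sink_query (GstVideoDecoder * dec, GstQuery * query)
    {
      if (GST_QUERY_TYPE (query) == GST_QUERY_CUSTOM)
        return my_dec_handle_custom_query (dec, query);   /* placeholder */

      /* everything else goes to the default handling */
      return gst_video_decoder_sink_query_default (dec, query);
    }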
gstvideoencoder.h:228: Warning: GstVideo: "@transform_meta"
parameter unexpected at this location:
* @transform_meta: Optional. Transform the metadata on ...
Push all pending events before pushing the gap. This ensures the
segment is pushed before the gap so it can be properly translated
to the running time
Includes unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=753360
Before aggregator based elements always started at running time 0,
now it's possible to select the first input buffer running time or
explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
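For example (property names and enum value assumed from this change;
compositor is any aggregator-based element):

    /* take the running time of the first buffer as the start time */
    g_object_set (compositor, "start-time-selection", 1 /* first, assumed */, NULL);

    /* or force an explicit start running time */
    g_object_set (compositor, "start-time", 2 * GST_SECOND, NULL);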
Add logging categories for most video objects.
Remove some useless debug lines in video-info and videotestsrc.
Add a performance debug line in the video scaler.
This minor update should work around a build system bug. While the
makefile has been updated to generate more enum types, there is nothing
that updates the header that would cause the generated code to be
produced again. This minor doc fix should ensure no one gets a build with
missing symbols.
E.g.
video-scaler.c: In function 'gst_video_scaler_horizontal':
video-scaler.c:1332:3: error: 'func' may be used uninitialized in this function [-Werror=maybe-uninitialized]
func (scale, src, dest, dest_offset, width, n_elems);
^
video-scaler.c: In function 'gst_video_scaler_vertical':
video-scaler.c:1373:3: error: 'func' may be used uninitialized in this function [-Werror=maybe-uninitialized]
func (scale, src_lines, dest, dest_offset, width, n_elems);
^
GCC's analysis seems to be correct, for the simple fact that if you pass
get_functions a known format, but no hscale or vscale, it'll return
TRUE without having done anything.
Some callers check for the scale values to be not NULL, but then
hscale->resampler.max_taps could return 0.
A different approach to the one presented in this patch is to check
for those max_taps, too, before calling get_functions.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=752051
It's needed to check whether pixel-aspect-ratio exists before fixating.
It does not exist if the input caps are not set yet and the allowed caps
do not contain pixel-aspect-ratio (e.g. when using GST_VIDEO_CAPS_MAKE).
https://bugzilla.gnome.org/show_bug.cgi?id=751932
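A sketch of the guarded fixation (the 1/1 target is just an example):

    GstStructure *s = gst_caps_get_structure (caps, 0);

    if (gst_structure_has_field (s, "pixel-aspect-ratio"))
      gst_structure_fixate_field_nearest_fraction (s, "pixel-aspect-ratio", 1, 1);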
POOL meta just means that this specific instance of the meta is related to a
pool, a copy should be made when reasonable and the flag should just not be
set in the copy.
The default implementation copies all metadata without tags, and metadata
with only the video tag. Same behaviour as in GstVideoFilter.
This currently does not work if the ::parse() vfunc is implemented as all
metas are getting lost inside GstAdapter.
https://bugzilla.gnome.org/show_bug.cgi?id=742385
CLAMP checks both whether the value is '< 0' and '> max'. The value will never
be negative since it is a division of an unsigned integer (i). Remove that check
and only check whether it is bigger than max, setting it appropriately.
CID #1308950
The problem here was that after removing the formats and
all the things we could convert, we then intersected these
caps with the template caps.
Hence if a subclass offered permissive sink templates
(e.g. all the possible formats videoconvert handles), but only
one output format, then at negotiation time getcaps returned
caps with the format restricted to that format, even though
we do handle conversion.
https://bugzilla.gnome.org/show_bug.cgi?id=751255
Add a utility function that, given a video size and a
packed stereoscopic mode, attempts to guess if the video
is packed at half resolution per view or not, since
very few videos provide the information.
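Roughly how a caller might use it (function and macro names assumed from the
multiview API; mv_mode and info are placeholders):

    if (gst_video_multiview_guess_half_aspect (mv_mode,
            GST_VIDEO_INFO_WIDTH (&info), GST_VIDEO_INFO_HEIGHT (&info),
            GST_VIDEO_INFO_PAR_N (&info), GST_VIDEO_INFO_PAR_D (&info))) {
      /* frame geometry suggests half resolution per view */
      GST_VIDEO_INFO_MULTIVIEW_FLAGS (&info) |= GST_VIDEO_MULTIVIEW_FLAGS_HALF_ASPECT;
    }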
We need to scale groups of 4 bytes for YUY2 formats so round up to 4.
It's possible that there is no Y byte for the last pixel so make sure
we clamp correctly.
When set, it causes videoaggregator to repeatedly aggregate the last buffer on
an EOS pad instead of skipping it and outputting silence. This is useful, for
instance, while playing back files seamlessly one after the other, to avoid
videoaggregator ever outputting silence (the checkerboard pattern).
It is to be noted that if all the pads on videoaggregator have this property set
on them, the mixer will never forward EOS downstream for obvious reasons. Hence,
at least one pad with 'ignore-eos' set to FALSE must send EOS to the mixer
before it will be forwarded downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=748946
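For example (mixer being a videoaggregator-based element such as compositor):

    GstPad *pad = gst_element_get_request_pad (mixer, "sink_%u");

    /* keep aggregating this pad's last buffer after it goes EOS */
    g_object_set (pad, "ignore-eos", TRUE, NULL);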
When copying info from the reference input state, duplicate
all the fields of the video info. The sub-class will have the
chance to override them later.
Add flags and enums to support multiview signalling in
GstVideoInfo and GstVideoFrame, and the caps serialisation and
deserialisation.
videoencoder: Copy multiview settings from reference input state
Add gst_video_multiview_* support API and GstVideoMultiviewMeta meta
https://bugzilla.gnome.org/show_bug.cgi?id=611157
Add preserve_update_caps_result boolean on the class to allow
sub-classes to disable videoaggregator removing sizes and framerate
from the update_caps() return result.
A return value of GST_FLOW_OK with a NULL buffer from get_output_buffer()
means the sub-class doesn't want to produce an output buffer, so
skip it.
If gst_videoaggregator_do_aggregate() generates an error, make sure
to propagate it - don't just ignore and discard the error by
over-writing it with the gst_pad_push() result.
Instead of returning the first video meta found on a buffer, return the
one with the lowest id (which is usually the same thing, except on
multi-view buffers)
All those were super straightforward from the warnings gtkdoc prints. It kind
of makes sense to apply them before the list of warnings is >100 and people
complain that gtkdoc is noisy.
GST_VIDEO_CONVERTER_OPT_ALPHA_MODE, GST_VIDEO_CONVERTER_OPT_CHROMA_MODE,
GST_VIDEO_CONVERTER_OPT_MATRIX_MODE, GST_VIDEO_CONVERTER_OPT_GAMMA_MODE and
GST_VIDEO_CONVERTER_OPT_PRIMARIES_MODE were G_TYPE_STRING with only a few valid
options. Changed those to real enums.
https://bugzilla.gnome.org/show_bug.cgi?id=749104
2 second frame duration is rather unlikely... but if we don't clip
away buffers that far before the segment we can cause the pipeline to
lock up. This can happen if audio is properly clipped, and thus the
audio sink does not preroll yet but the video sink prerolls because
we already outputted a buffer here... and then queues run full.
In the worst case we will clip one buffer too many here now if no
framerate is given, no buffer duration is given and the actual
framerate is less than 0.5fps.
Fixes seeking on HLS/DASH streams, when seeking into the middle of
fragments and having no framerate/buffer duration.
This will be useful for elements that wish to post unhandled navigation
events on the bus to give the application a chance to do something with
it
https://bugzilla.gnome.org/show_bug.cgi?id=747245
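A sketch of how a sink might use it (assuming gst_navigation_message_new_event()
is the API added here; sink and event are placeholders):

    GstMessage *msg = gst_navigation_message_new_event (GST_OBJECT (sink), event);

    gst_element_post_message (GST_ELEMENT (sink), msg);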
Take into account the different steps between Y and UV when calculating
the line size for vertical resampling or else we might not resample
enough pixels and leave bad lines.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=747790
Otherwise we would forward the GAP event without ever providing any caps,
which then would make decodebin expose a srcpad without any caps set. That's
confusing for applications and can lead to all kinds of interesting bugs.
Instead do the same as already is done in GstAudioDecoder, and try to invent
caps based on the sinkpad caps and the caps allowed by downstream and the
srcpad template caps.
https://bugzilla.gnome.org/show_bug.cgi?id=747190
This would've also triggered if for some reason the segment was updated
in such a way that PTS went backwards, but the running time increased. Like
what happens when non-flushing seeks are done.
We're doing a proper buffer-from-the-past check a few lines below based on the
running time, which is the only time we should care about here.
memcmp will blindly compare the reserved fields, as well as any
padding the compiler may choose to sprinkle in GstSegment.
Fixes valgrind complaints in unit tests, as well as some found via
https://bugzilla.gnome.org/show_bug.cgi?id=738216
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called
with a private lock, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live as
we think that 0 upstream latency is possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
Based on patch from Mozzhuhin Andrey <nopscmn at gmail.com>
Add a table-based matrix8 multiplication implementation. The algorithm
does not do any clipping, so we need to make sure we never call it on
input that might need to be clipped. In general, this algorithm is
2 times faster than the orc-optimized one and would be chosen for all
RGB -> YUV conversions and some YUV->YUV and RGB->RGB conversions.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=732186
In case the original caps were missing some optional fields like
interlace-mode. We assume default values for those everywhere,
but they can still cause negotiation to fail if a downstream element
expects the field to be there and at a specific value.
When we are using the fast linear resampler, use the ->inc to calculate
the first and last pixel we need so that we can do vertical resampling
on the right amount of pixels.