There was a typo in the extension name which resulted in the modifiers
never being set when doing DMABuf import. That triggered the modifiers
lookup in the Intel driver, which was in fact hiding bugs in the gldownload
to glupload path when doing DMABuf.
Note, this change breaks the following pipeline on Intel and
some other drivers:
gltestsrc ! gldownload ! video/x-raw\(memory:DMABuf\) ! glimagesink
A fix for this was added to Mesa recently:
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
Fixes 5d0e191710
We don't support modifiers and that would result in a bad image being
displayed. Note that this was fixed recently in Mesa MR 1138; prior to
that, the reported modifier was always 0, which makes this change a
no-op.
Fixes #441
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
This is possibly not strictly needed when pixels are being downloaded to
CPU memory, but it would cause issues when exporting a DMABuf, as the data
may not yet be ready when the DMABuf reaches the consumer.
There are streams in the wild (mp4) that basically contain no tracks
but do have redirect info [0], in which case qtdemux won't be able
to expose any pad (there are no tracks) and so can't post anything but
an error on the bus, as:
- it can't send EOS downstream, it has no pad,
- posting an EOS message would be useless as the PAUSED state can't be
  reached and there is no sink in the pipeline, meaning GstBin will
  simply ignore it
In that case, currently the application could try to handle that, but it
is pretty complex: it will get the REDIRECT message on the bus, at
which point it could set the URL, but playbin will ignore it as it
will only be taken into account on the next EOS. It thus needs to set
the pipeline to NULL (READY won't do as it is already in READY at that
point). And it needs to figure out that the following ERROR message on
the bus needs to be ignored, which is not really simple.
The approach here is to allow elements to add details to the ERROR
message with a `redirect-location` field, which elements like playbin
handle and use right away.
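For illustration, a sketch of both sides (the error domain/code, strings
and variable names are assumptions; only the `redirect-location` field
name comes from this change):

/* Element side: attach the redirect target as a detail of the ERROR. */
GST_ELEMENT_ERROR_WITH_DETAILS (demux, STREAM, DEMUX,
    ("This file contains no playable streams."),
    ("redirecting to %s", redirect_uri),
    ("redirect-location", G_TYPE_STRING, redirect_uri, NULL));

/* Application / playbin side: read the detail back from the bus message. */
const GstStructure *details = NULL;

gst_message_parse_error_details (msg, &details);
if (details && gst_structure_has_field (details, "redirect-location")) {
  const gchar *uri = gst_structure_get_string (details, "redirect-location");
  /* switch to the new URI instead of reporting the error */
}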
We could also use the element 'redirect' message in playbin, but the
issue with that approach is that the element would still emit the ERROR
message on the bus, leading to wrong behaviour. That can't be avoided
since, in the case where the app/parent pipeline is not handling the
redirect instruction, the ERROR message is necessary (and there is no
way for the element emitting the redirect to detect that the message
has been "handled").
[0]: http://movietrailers.apple.com/movies/paramount/terminator-dark-fate/terminator-dark-fate-trailer-2_480p.mov
It's either this or replacing all the object lock usage in gldisplay
with a recursive mutex, which is not backwards compatible.
The failure case is effectively:
1. The user has locked the display object lock
2. a glcontext loses its last ref and attempts to quit the window
3. gst_gl_window_quit() attempts to remove the window from the display
4. gst_gl_display_remove_window attempts to take the display object lock
The only concern with changing the locking for the window list in the
display is that gst_gl_display_create_window() has documentation requiring
the object lock to be held, which must continue to work correctly.
Returning a transfer-none value for a value protected by a lock is not
thread-safe, as the reference could disappear before the caller can take
its own reference.
Following the [design document], encodebin needs to handle sources that
output multiple streams. For that purpose, and to make it simpler,
we ensure that a single segment is output to the encoders by using
an `identity single-segment=true` at the beginning of the stream chains.
Added API to enable or disable the use of that new feature.
Added support in the encoding profile parser for that new property,
keeping backward compatibility.
[design document]: https://gstreamer.freedesktop.org/documentation/additional/design/encoding.html?gi-language=c#rendering-timelines
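A minimal sketch of enabling this from an application, assuming the new
API is exposed on the encoding profile as
gst_encoding_profile_set_single_segment():

#include <gst/pbutils/pbutils.h>

static void
enable_single_segment (GstEncodingProfile * profile)
{
  /* Make encodebin push a single segment to the encoders (assumed API). */
  gst_encoding_profile_set_single_segment (profile, TRUE);
}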
Setting the property again after it had already been set ran
g_value_unset() but did not initialize it again, so g_value_copy() failed
afterwards. Removed the unset, as cleanup is done implicitly by
g_value_copy().
Changing the mix-matrix property did not trigger reconfiguration of the
caps; this has been added.
If the matrix is set to an empty matrix, instead of copying it, the
matrix is simply disabled by setting mix_matrix_is_set (formerly
mix_matrix_was_set) to FALSE, so the mix-matrix is ignored from then on.
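A sketch exercising the fixed path (element and coefficient values are
illustrative, e.g. a single-channel identity matrix on audioconvert):

static void
set_mix_matrix_twice (GstElement * convert)
{
  GValue matrix = G_VALUE_INIT;
  GValue row = G_VALUE_INIT;
  GValue coeff = G_VALUE_INIT;

  g_value_init (&matrix, GST_TYPE_ARRAY);
  g_value_init (&row, GST_TYPE_ARRAY);
  g_value_init (&coeff, G_TYPE_FLOAT);

  g_value_set_float (&coeff, 1.0f);
  gst_value_array_append_value (&row, &coeff);
  gst_value_array_append_value (&matrix, &row);

  /* The second set used to fail because the stored GValue was unset but
   * never re-initialized; it now also triggers caps reconfiguration. */
  g_object_set_property (G_OBJECT (convert), "mix-matrix", &matrix);
  g_object_set_property (G_OBJECT (convert), "mix-matrix", &matrix);

  g_value_unset (&coeff);
  g_value_unset (&row);
  g_value_unset (&matrix);
}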
Previously this would've only set discont=TRUE and then for all future
buffers simply returned immediately.
Instead we also need to
a) drain previous input until its buffer time
b) update next_ts and base_ts accordingly for the gap
c) actually store the new buffer after the gap so it can be used in
the future and so the old buffer before the gap is gone
Also update the unit test accordingly so that it actually tests for this
behaviour. Previously it only tested that after the gap we got no output
at all.
I'm going to use this new API in gst-omx so an encoder can request
v4l2src to produce buffers matching the encoder stride and slice heights,
preventing copies of incoming buffers.
Especially for interlaced input make sure to
a) never mix both fields
b) never read lines after the end of the input frame
c) allocate enough space in the temporary lines to not write outside
the allocated memory area
This fixes various memory corruptions and rescaling artefacts.
At the moment, we only posted QoS messages when frame_drop() was
called, but not in finish_frame() when QoS triggered a late push.
This should fix applications that try to account for dropped
frames. We also emit a warning on drops so it's clearer what is
happening.
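For reference, a sketch of how an application could account for the
drops from the bus (wiring the callback via a "message::qos" signal
watch is assumed):

static void
on_qos_message (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstFormat format;
  guint64 processed, dropped;

  gst_message_parse_qos_stats (msg, &format, &processed, &dropped);
  g_print ("QoS: processed %" G_GUINT64_FORMAT ", dropped %" G_GUINT64_FORMAT "\n",
      processed, dropped);
}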
Use the new API to tell buffer consumers about alignment details.
This change is backward compatible, as non-ported elements can safely
ignore the alignment information and keep processing buffers as they
used to, copying if necessary.
By adding this field, buffer producers can now explicitly set the exact
geometry of planes, allowing users to easily know the padded size and
height of each plane.
GstVideoMeta is always heap allocated by GStreamer itself so we can
safely extend it.
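A sketch of a producer attaching the plane padding, assuming the new
call is gst_video_meta_set_alignment() (padding values and buffer setup
are illustrative):

#include <gst/video/video.h>

static void
attach_alignment (GstBuffer * buffer, const GstVideoInfo * info)
{
  GstVideoMeta *meta;
  GstVideoAlignment align;

  meta = gst_buffer_add_video_meta (buffer, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
      GST_VIDEO_INFO_HEIGHT (info));

  gst_video_alignment_reset (&align);
  align.padding_right = 32;     /* illustrative stride padding */
  align.padding_bottom = 16;    /* illustrative slice-height padding */

  /* Tell consumers the exact plane geometry of this buffer. */
  gst_video_meta_set_alignment (meta, align);
}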
When using gst_video_info_align() the user had no easy way to retrieve
the padded size and height of each plane.
This can easily be implemented in fill_planes() as it's already called
in align() with the padded height.
Ideally we'd add a plane_size field to GstVideoInfo but the remaining
padding is too small so that would be an ABI break.
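A sketch of retrieving the padded plane sizes, assuming the new call
landed as gst_video_info_align_full() (format and padding values are
illustrative):

GstVideoInfo info;
GstVideoAlignment align;
gsize plane_size[GST_VIDEO_MAX_PLANES];
guint i;

gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 320, 240);
gst_video_alignment_reset (&align);
align.padding_bottom = 16;

if (gst_video_info_align_full (&info, &align, plane_size)) {
  for (i = 0; i < GST_VIDEO_INFO_N_PLANES (&info); i++)
    g_print ("plane %u: %" G_GSIZE_FORMAT " bytes\n", i, plane_size[i]);
}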
Fixes #618
We want to round up when halving the height.
I do have a test for this but it relies on my new video-align tests, so
it's part of the next commit. Recording the fix separately in case we
want to backport it to the stable branch.
Similar to gst_video_info_from_caps(), which allows encoded video
formats, don't make gst_audio_info_from_caps() error out on encoded
audio formats.
Because gst_audio_info_set_format() supports encoded formats, the
current behavior does not seem consistent.
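A sketch of the now-consistent behaviour (the caps string is
illustrative):

GstAudioInfo info;
GstCaps *caps =
    gst_caps_from_string ("audio/mpeg, mpegversion=4, rate=48000, channels=2");

/* No longer fails just because the format is an encoded one. */
if (gst_audio_info_from_caps (&info, caps))
  g_print ("parsed caps, format: %s\n", GST_AUDIO_INFO_NAME (&info));
gst_caps_unref (caps);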
We need to provide twice as many lines as usual to the scaling function
as every second line would be skipped.
Without this we read from random memory and produce colorful output and
crashes.
Without this, scaling e.g. interlaced UYVY causes corrupted output with
lines as follows: f1 f1 f2 f2, i.e. two lines of each field and only
then the other field.
The watch->messages_bytes is not decreased when the write operation
from the backlog is only partially successful.
This commit decreases watch->messages_bytes for the successfully
sent messages.
Fixes #679
This can be made to work in certain circumstances when
cross-compiling, so default to not building g-i stuff
when cross-compiling, but allow it if introspection was
enabled explicitly via -Dintrospection=enabled.
See gstreamer/gstreamer#454 and gstreamer/gstreamer#381.
Y210 is a 10-bit YUY2, so we can re-use the YUY2 shaders, but the GL
format is set to RG16.
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y210 ! glimagesink
NV16/NV61 is basically the same as NV12/NV21 with a higher chroma resolution.
Since only the size of the UV plane/texture is different, the same shaders are used as for NV12/NV21.
When checking the behaviour of live seeking on audiomixer or
adder we don't *really* need real audio devices; audiotestsrc
in live mode (`is-live=true`) is enough to test the behaviour of those
elements.
This also avoids people repeatedly wasting hours trying to figure out
whether a failing behaviour is due to their code or not.
This is done by reusing `gst_gl_memory_setup_buffer`, avoiding code
duplication.
Without a VideoMeta, mapping those buffers leads to GstBuffer mapping
the buffer in system memory even when specifying the GL flags (through
the buffer merging mechanism), making the result totally broken.
In !427, I removed the call to get_devices in order to always
print added devices from the bus handler; however, this requires
the main loop to run until all pending messages have been consumed.
This commit achieves this by always running the main loop and
simply adding an idle source to quit it in the non --follow case.
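A sketch of that logic (variable names are illustrative):

static gboolean
quit_loop (gpointer user_data)
{
  g_main_loop_quit ((GMainLoop *) user_data);
  return G_SOURCE_REMOVE;
}

/* ... after starting the monitor and installing the bus handler ... */
if (!follow)
  g_idle_add (quit_loop, loop);  /* idle priority: runs after pending messages */
g_main_loop_run (loop);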