A 5 second timeout might not be enough when the application is running
on a device with very limited computing power.
Note that ANGLE uses a 10 second timeout, so even if a timeout
happens here, ANGLE's timeout condition will be hit as well
(meaning that bad things will happen either way).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/769>
It was not working properly and the implementation of the smartencoder
element was weird. This introduces a number of changes (which are all
in one single commit because they basically all work together and lead
to basically reimplementing the element):
* Make smartencoder a bin so that the reencoding chain of elements is
inside of it instead of not having any parent. Those elements were not
visible when dumping the pipeline, which was very confusing.
* Make encodebin create the right encoder with a capsfilter (and parser)
to properly enforce the format specified by the user, and so that the
encoder properties specified in the encoding profile are respected.
* Use `decodebin` to do the decoding instead of selecting a decoder
ourselves without plugging any parser, etc.
* Ensure that the format negotiated on the smart encoder sinkpad stays
fixed over time when the user requested a non-dynamic output.
* Add a parser at the beginning of the smart encoder.
* Handle errors when reencoding.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/751>
This adds a linear 32x32 NV12-based tile format. This format is notably used
by the Allwinner VCU and exposed in V4L2 as the "SUNXI Tiled" format. In this
patch we generalize the plane info calculation so we can share this part
with the 4L4 variant.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/754>
All UI elements follow the Single-Threaded Apartment (STA) model.
As a result, we should access them from the dedicated UI thread.
Due to the nature of this threading model, ANGLE will wait for the UI
thread while closing internal window/swapchain objects.
A problem here is that when destroying GstGLWindow from the UI thread,
it will wait for GstGLContext's internal thread. Meanwhile, GstGLContext's
internal thread will be blocked because ANGLE wants to access the UI thread.
That will cause a deadlock or exceptions.
In short, the application should not try to call
gst_element_set_state(pipeline, GST_STATE_NULL) from a UI thread.
That's a limitation of the current implementation.
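A minimal sketch (not part of this patch) of how an application could work
around that limitation by tearing the pipeline down from a worker thread
instead of the UI thread:

  static gpointer
  teardown_thread (gpointer data)
  {
    GstElement *pipeline = data;

    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return NULL;
  }

  /* from the UI thread */
  g_thread_unref (g_thread_new ("teardown", teardown_thread,
      gst_object_ref (pipeline)));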
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/745>
Add a "handle-segment-change" property to allow the user to push
custom segment events. For now, this property only works with
time-format GstSegments.
This property can be useful when the application controls the timeline
of the stream, for example when there is a timestamp discontinuity but
playback is expected to be continuous. The multi-period scenario of
MPEG-DASH is an example of this use case.
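A minimal sketch of the intended usage, assuming the property lands on
appsrc and that the custom segment travels in the GstSample pushed by the
application (the buffer and caps variables are assumed to exist):

  g_object_set (appsrc, "format", GST_FORMAT_TIME,
      "handle-segment-change", TRUE, NULL);

  GstSegment segment;
  gst_segment_init (&segment, GST_FORMAT_TIME);
  segment.start = 30 * GST_SECOND;  /* e.g. start of the next DASH period */

  GstSample *sample = gst_sample_new (buffer, caps, &segment, NULL);
  gst_app_src_push_sample (GST_APP_SRC (appsrc), sample);
  gst_sample_unref (sample);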
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/663>
Since we are using VideoMeta, the converter (like the video_frame_copy
utility) should have no issue dealing with frames that are slightly larger.
This situation occurs because some elements use padded width/height for
allocation, which results in a VideoMeta width/height being larger than the
display width/height found in the negotiated caps.
Fixes #790
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/747>
For example, BT709, BT601, and BT2020_10 all have theoretically
different transfer functions, but the same function in practice. In
these cases, we should use the fast path for negotiating. Also,
BT2020_12 is essentially the same as the other three, just with one more
decimal place of precision, so it gives the same result for fewer bits. It is
now also aliased to the former three.
Also make videoconvert do passthrough if the caps have equivalent
transfer functions but are otherwise matching.
As of the previous commit, we write the correct transfer function for
BT601, instead of the (functionally identical but different ISO code)
transfer function for BT709. Files created using GStreamer prior to that
commit write the wrong transfer function for BT601 and are, strictly
speaking, 2:4:5:4 instead. However, this commit takes care of
negotiation, so that conversions from/to the same transfer function are
done using the fast path.
Fixes #783
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/724>
If the cropping or scaling input or output rects put us completely
outside the input/output frame respectively, we can't draw anything
except black safely. Check for those conditions and don't set up a
configuration that attempts to access out of bounds memory outside
the input/output framebuffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/696>
If the frames passed in to gst_video_converter_frame()
have a different layout than was configured for, the
conversion code might go out of bounds and crash.
Do a sanity check on each frame passed in, and in the
absence of a return value in the API, just
refuse the conversion in invalid cases and leave the
destination frame untouched so it's obvious to
users that it was broken.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/696>
This patch was taken from #629#note_178766; the comment made
at the time was:
The root issue is a mismatch between the initialization of render_rect
in GstGLWindowX11Private and what's expected in the draw_cb function.
Because render_rect is not explicitly initialized to a width and height
of -1 (unlike gstglwindow_wayland_egl.c which does initialize to -1),
the less-than check for explicitly-set render_rect at gstglwindow_x11.c:453-454
always fails, even when the parent_win has been set and the render rectangle
has never been set.
Maybe this came from copying the similar check in the wayland code? Regardless,
I think the correct inequality should be '<= 0' (on both lines).
Alternatively initialization could be changed, but other sinks, e.g.
xvimagesink don't appear to use -1 to mean "unset" render_rect this way.
The issue can be reproduced by running the example in
tests/examples/gl/qt/videooverlay/ on X11 and resizing the output
window.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/701>
Previously we only handled one event at a time, which could lead to the
following two suboptimal situations:
- frame 0 at 20ms, frame 1 at 40ms and two force-keyunit events at 10ms
and 15ms. We would create a new keyframe for both of the frames.
- 100 force-keyunit events with running-time NONE would cause all
following 100 frames to be made into a keyframe.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/684>
Also not optimal but at least simplifies the code a bit and doesn't
require g_list_length() and g_list_append() in a few places.
For 2.0 there are some more candidates to change but unfortunately
they're currently part of the API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/683>
This means we cannot access [view layer] or view.bounds from the OpenGL
thread. This also means that we need to call the main thread when
setting the window handle. However, we cannot perform that
synchronously as that may deadlock with the application performing the
set_window_handle() call.
We need to defer the actual update and run it asynchronously and wait
for the window handle update internally at each point it is needed.
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/372
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/681>
The gir generator wrongly assumes that the vfunc
GstGLFilterClass.filter() and the method gst_gl_filter_filter_texture()
are related. As a result it complains about mismatched argument names.
Work around this by naming both of their arguments input and output.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/664>
The old approach does not consider the profile idc. The profile idc should
play a more important role in recognizing the profile than the other
information. There is also no need to mix profiles of different extensions
together to find the closest profile when the bitstream is not standard;
different extensions support different features and should not be mixed.
The correct way is to recognize the extension category by profile idc
first, and then find the closest profile.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/645>
GstH265FormatRangeExtensionProfile declares the common bits used
not only for format range extension profiles, but also for several
other H.265 extension profiles, such as high throughput and screen
content coding extensions. So the old name is not appropriate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/645>
If there are two elements and threads attempting to query each other for
an OpenGL context, the locking may result in a deadlock.
We need to unlock each element's context_lock when querying another
element for the OpenGL context in order to allow any other element to
take the lock when the other element is querying for an OpenGL context.
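A minimal sketch of the pattern; the lock field and surrounding code are
illustrative, not the exact gstglutils code:

  /* drop our own context lock before querying the peer element,
   * which may query us back and need this lock */
  g_mutex_unlock (&self->context_lock);
  res = gst_pad_peer_query (pad, query);
  g_mutex_lock (&self->context_lock);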
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/642>
When receiving an instant-rate-change event, store the updated
seek flags and replace the flags in any input segments with them
to allow for instant switching between trickmodes and not.
It's possible that a buffer is within the segment proper,
but not within the "valid" part we're playing, which only
covers what comes after the 'offset' part of the segment. In that case,
the running-times of the buffer start and stop will be
GST_CLOCK_TIME_NONE, and we'd better not schedule playback that
far in the future.
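A minimal sketch of the check, assuming start/stop are the clipped buffer
timestamps (names illustrative):

  guint64 start_rt, stop_rt;

  start_rt = gst_segment_to_running_time (segment, GST_FORMAT_TIME, start);
  stop_rt = gst_segment_to_running_time (segment, GST_FORMAT_TIME, stop);
  if (start_rt == GST_CLOCK_TIME_NONE || stop_rt == GST_CLOCK_TIME_NONE) {
    /* buffer lies before the 'offset' part of the segment: don't schedule
     * it far in the future, just skip it */
  }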
This commit modifies the GstVideoMasteringDisplayInfo and
GstVideoContentLightLevel structs so that each value more closely matches
the hdr_metadata_infoframe struct of the Linux DRM header and the
DXGI_HDR_METADATA_HDR10 struct on Windows.
Each value is thus no longer a fraction but a normalized value as per the
CTA 861.G spec. The unit of each value is also consistent with the H.264 and
H.265 specifications, the hdr_metadata_infoframe struct on Linux and the
DXGI_HDR_METADATA_HDR10 struct on Windows.
[38/1301] Generating GstVideo-1.0.gir with a custom command.
../subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.c:231: Error: GstVideo: identifier not found on the first line:
*
^
Because the color value is stored in the MSBs, we can reuse the
Y210 code for P012_LE / P012_BE.
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y212_LE ! glimagesink
This can be used to have compositor display either the background
or a stream on a lower zorder after a live input stream freezes
for a certain amount of time, for example because of network
issues.
gst_gl_window_quit() will attempt to send a message but will be called
from GstGLContext's finalize handler, so the weak ref that backs
gst_gl_window_get_context() will return NULL as it has already been
cleared. We need that context in send_message_async to decide whether
to run the provided callback immediately or queue it in GCD.
Without this fix, it is possible that outbuf is not initialized, which
will result in a segfault when calling gst_buffer_replace (&outbuf, NULL). In
addition, the patch fixes a potential memory leak in the restart path.
The segfault can be reproduced by the pipeline below:
GST_GL_PLATFORM=egl \
gst-launch-1.0 videotestsrc ! msdkh265enc ! msdkh265dec ! \
'video/x-raw(memory:DMABuf)' ! glimagesink
In the situation where the direct dmabuf path is chosen but the texture
format is unsupported, this makes the accept step fail rather than
continuing and failing at the upload stage. It is also possibly necessary to
reconfigure after falling back from the direct to the non-direct dmabuf
upload path.
The wordlen ("length") MUST represent the total "number of 32-bit words
in the extension, excluding the four-octet extension header" (rfc3550).
There are cases where already existing padding is reused when adding
the new extension, so the wordlen should be updated if the newly
added extension causes it to increase.
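A small sketch of the rule (the helper name is illustrative): the wordlen is
the extension data size rounded up to whole 32-bit words, with the
four-octet header excluded.

  static guint16
  ext_wordlen (gsize extension_data_bytes)
  {
    /* round up to whole 32-bit words, header excluded */
    return (extension_data_bytes + 3) / 4;
  }

For example, growing the extension data from 6 to 8 bytes still fits in
wordlen 2, but growing it to 9 bytes requires bumping wordlen to 3.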
This patch introduces a new API to send and parse mouse scroll events. Mouse
event coordinates are sent relative to the display space of the related output
area. This is usually the size in pixels of the window associated with the
element implementing the GstNavigation interface.
This patch introduces a property which, if set to FALSE, prevents RTP
basepayloader from scaling the RTP time when a segment's rate is not
equal to 1.0. The specification is ambiguous on this subject and some
clients expect the timestamps not to be scaled.
This allows us to remove races when setting the wl_queue on wayland
objects with wl_proxy_set_queue() as each created object is created with
the queue already set.
We can also move all our initialization code into the window, as we
can retrieve all wayland objects from each window instance. This
removes a possible race when integrating with external APIs, as we would
always attempt to immediately retrieve a small set of wayland objects.
That is no longer the case with the objects from each window instance.
Automatic negotiation of texture-target=external-oes does not work
without separating the external-oes support out of the DirectDmabuf
uploader into a separate DirectDmabufExternal uploader.
gst_gl_upload_transform_caps() is missing a NULL pointer check in case
the current upload method's transform_caps returns a NULL pointer. In
the following loop over all upload methods, NULL pointer return values
are already handled correctly.
Some drivers support directly importing DMA buffers in some formats into
external-oes textures only, for example because the hardware contains
native YUV samplers.
Note that in these cases colorimetry can only be passed as hints and
there is no feedback whether the driver supports the required YUV
encoding matrix and quantization range.
Allow creating EGL images from DMA buffers in formats that the driver
only supports for the external-oes texture target.
Pass the intended texture target to gst_egl_image_from_dmabuf_direct so
that _gst_egl_image_check_dmabuf_direct can decide whether to create an
EGL image for a format that can only be targeted at external-oes
textures by the driver. Allow creating GstGLMemoryEGL objects from these
DMA buffers.
The GST_VIDEO_BUFFER_FLAG_TOP_FIELD flag is a superset of
GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD as they are defined using other
flags. As a result we can't use GST_BUFFER_FLAG_IS_SET() to check for
those flags.
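A minimal sketch of a correct check: since the flag values overlap, the full
mask has to be compared rather than testing any single bit.

  gboolean is_top = (GST_BUFFER_FLAGS (buf) &
      GST_VIDEO_BUFFER_FLAG_TOP_FIELD) == GST_VIDEO_BUFFER_FLAG_TOP_FIELD;
  gboolean is_bottom = !is_top && (GST_BUFFER_FLAGS (buf) &
      GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD) == GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD;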
The ANGLE_surface_d3d_render_to_back_buffer extension is only available
in the Microsoft fork of ANGLE. Note that Microsoft's ANGLE repository
has been deprecated.
Previously we would simply use them without any locking at all, while
using the object lock for setting them. Nothing prevented new callbacks
from being set in the meantime, potentially calling a callback with
already-freed user_data.
To prevent this, move the callbacks into a reference-counted struct and
use the appsrc/appsink mutex to protect access to it, as that mutex is
already used in all functions calling the callbacks anyway.
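A minimal sketch of the ref-counted holder described above; the field and
function names are illustrative, not the exact appsrc/appsink code:

  typedef struct {
    gint ref_count;
    GstAppSinkCallbacks callbacks;
    gpointer user_data;
    GDestroyNotify destroy_notify;
  } Callbacks;

  static Callbacks *
  callbacks_ref (Callbacks * cb)
  {
    g_atomic_int_inc (&cb->ref_count);
    return cb;
  }

  static void
  callbacks_unref (Callbacks * cb)
  {
    if (!g_atomic_int_dec_and_test (&cb->ref_count))
      return;
    if (cb->destroy_notify)
      cb->destroy_notify (cb->user_data);
    g_free (cb);
  }

The element takes the mutex, refs the current holder, releases the mutex and
then invokes the callback, so a concurrent set-callbacks call can never free
user_data from under it.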
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/729
By setting the extension ID for TWCC (Transport-Wide Congestion Control),
the payloader will embed sequence numbers as an RTP header extension
according to https://tools.ietf.org/html/draft-holmer-rmcat-transport-wide-cc-extensions-01#section-2
The negotiation of enabling this with downstream elements is done with
caps, reflecting the way this is communicated using SDP.
With the commit "basepayload: Expose onvif-no-rate-control property", the RTP
timestamp behaviour changed when rate control is disabled.
When disabling rate control, we must take care of the stream time to
avoid the timestamps starting from zero again.
This simply implies not trying to "prepare" those buffers,
as mapping an empty buffer to a video frame does not make
much sense.
This also adds a simple test in compositor that performs
some trivial checking of the handling of gap events. The test
was not failing before, but an error was logged; this is
no longer the case.
Fixes #717
If there's no known value in the best caps then the functions to convert
them to strings will return NULL. Having the fields not in the caps is
not a problem, having them with a NULL value however will cause
negotiation failures.
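A minimal sketch of the guard; the field and conversion function are chosen
only for illustration:

  const gchar *str = gst_video_interlace_mode_to_string (best_interlace_mode);

  if (str != NULL)
    gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING, str, NULL);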
Save the push/push_list helper flow return and, in case of failure, return it
in the process function. This allows forwarding the downstream flow return
even if the subclass is using the push/push_list helpers.
In prepare_frame, it is not enough for the target info
(conversion_info) to not have changed to decide not to update
the converter, as the vpad info may have changed as well.
Fixes #714
Posting any message to the parent window seems to be pointless and might
break the parent window.
Regardless of the posting, the parent window can catch mouse events,
and any keyboard events will be handled by the parent window by default.
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/634
This marker is optional; its name refers to the RTP marker bit. This marker
can be used to reduce latency in various use cases. With the split between
finish_frame() and finish_subframe() we will now be able to identify
the last subframe with no latency.
In order to detail the use of GST_BUFFER_FLAG_MARKER in a video
use case, the GST_VIDEO_BUFFER_FLAG_MARKER flag has been introduced
with proper documentation clarifying the marker's role.
Introduce a new API so encoders can split the encoding into subframes.
This can be useful to reduce the overall latency, as we no longer need to
wait for the full frame to be encoded to start decoding or sending it.
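A hedged sketch of how an encoder could use the split, assuming the new
gst_video_encoder_finish_subframe() API and that each subframe's data is
placed in the frame's output buffer:

  static GstFlowReturn
  push_slice (GstVideoEncoder * enc, GstVideoCodecFrame * frame,
      GstBuffer * slice, gboolean is_last)
  {
    frame->output_buffer = slice;

    if (!is_last)
      return gst_video_encoder_finish_subframe (enc, frame);

    /* the last subframe carries the marker and completes the frame */
    GST_BUFFER_FLAG_SET (slice, GST_VIDEO_BUFFER_FLAG_MARKER);
    return gst_video_encoder_finish_frame (enc, frame);
  }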
The matrices were in the wrong order.
Instead of the conversion matrix being
  XYZ_TO_RGB_output * RGB_TO_XYZ_input * input_RGB
it was
  RGB_TO_XYZ_input * XYZ_TO_RGB_output * input_RGB
This function might be revisited with a different channel position mapping
while the audio source goes into play, so the reorder flag needs to be reset
before the checks happen.
Instead initialize the map infos, etc to NULL like gst_buffer_map()
would be doing on a zero-sized buffer.
This fixes a crash in audioresample if the first output buffer would
contain zero samples.
There was a typo in the extension name which resulted in the modifiers
never being set when doing DMABuf import. That triggered the modifiers
lookup in the Intel driver, which was in fact hiding bugs in the gldownload
to glupload path when doing DMABuf.
Note that this change breaks the following pipeline on Intel and
some other drivers:
gltestsrc ! gldownload ! video/x-raw\(memory:DMABuf\) ! glimagesink
A fix for this was added to Mesa recently:
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
Fixes 5d0e191710
We don't support modifiers and that would result in a bad image being
displayed. Note that this was fixed recently in Mesa MR 1138; prior to
that, the reported modifier was always 0, which makes this change a
no-op.
Fixes #441
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
It's either this or replacing all the object lock usage in gldisplay
with a recursive mutex, which is not backwards compatible.
The failure case is effectively:
1. The user has locked the display object lock
2. a glcontext loses its last ref and attempts to quit the window
3. gst_gl_window_quit() attempts to remove the window from the display
4. gst_gl_display_remove_window attempts to take the display object lock
The only concern with changing the locking for the window list in the
display is that gst_gl_display_create_window() has documentation requiring
the object lock to be held which must continue to work correctly.
Returning a transfer none value for a value checked by a lock is not
thread safe as the reference could disappear before the caller can take
its reference.
Following the [design document], encodebin needs to handle sources that
output multiple streams. For that purpose, and to make it simpler,
we ensure that a single segment is output to the encoders by using
an `identity single-segment=true` at the beginning of the stream chains.
Added API to enable or disable the use of that new feature.
Added support in the encoding profile parser for that new property,
keeping backward compatibility.
[design document]: https://gstreamer.freedesktop.org/documentation/additional/design/encoding.html?gi-language=c#rendering-timelines
I'm going to use this new API in gst-omx so an encoder can request
v4l2src to produce buffers matching the encoder stride and slice heights
preventing copies of incoming buffers.
Especially for interlaced input make sure to
a) never mix both fields
b) never read lines after the end of the input frame
c) allocate enough space in the temporary lines to not write outside
the allocated memory area
This fixes various memory corruptions and rescaling artefacts.
Until now, we only posted QoS messages when frame_drop() was
called, but not in finish_frame() when QoS triggered a late push.
This should fix applications that try to account for the dropped
frames. We also emit a warning on drops so it's clearer what is
happening.
By adding this field, buffer producers can now explicitly set the exact
geometry of planes, allowing users to easily know the padded size and
height of each plane.
GstVideoMeta is always heap allocated by GStreamer itself so we can
safely extend it.
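A minimal sketch of how a consumer could read that geometry back, assuming
the gst_video_meta_get_plane_size() helper added alongside this field:

  GstVideoMeta *meta = gst_buffer_get_video_meta (buffer);
  gsize plane_size[GST_VIDEO_MAX_PLANES];

  if (meta != NULL && gst_video_meta_get_plane_size (meta, plane_size))
    g_print ("padded size of plane 0: %" G_GSIZE_FORMAT "\n", plane_size[0]);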
When using gst_video_info_align(), the user had no easy way to retrieve the
padded size and height of each plane.
This can easily be implemented in fill_planes() as it's already called
in align() with the padded height.
Ideally we'd add a plane_size field to GstVideoInfo, but the remaining
padding is too small, so that would be an ABI break.
Fix #618
We want to round up when halving the height.
I do have a test for this but it relies on my new video-align tests, so
it's part of the next commit. Recording the fix separately in case we want
to backport it to the stable branch.
Similar to gst_video_info_from_caps(), which allows encoded video formats,
don't make gst_audio_info_from_caps() error out on encoded audio formats.
Because gst_audio_info_set_format() supports encoded formats, the current
behavior does not seem to be consistent.
We need to provide twice as many lines as usual to the scaling function
as every second line would be skipped.
Without this we read from random memory and produce colorful output and
crashes.
Without this, scaling e.g. interlaced UYVY causes corrupted output with
lines as follows: f1 f1 f2 f2, i.e. two lines of each field and only
then the other field.
The watch->messages_bytes is not decreased when the write operation
from the backlog is only partly successful.
This commit decreases watch->messages_bytes for the successfully
sent messages.
Fixes #679
Y210 is a 10-bit YUY2, so we may reuse the YUY2 shaders, but the GL format
is set to RG16.
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y210 ! glimagesink
NV16/NV61 is basically the same as NV12/NV21 with a higher chroma resolution.
Since only the size of the UV plane/texture is different, the same shaders are used as for NV12/NV21.
This is done by reusing `gst_gl_memory_setup_buffer`, avoiding code
duplication.
Without a VideoMeta, mapping those buffers leads to GstBuffer mapping the
buffer in system memory even when specifying the GL flags (through the
buffer merging mechanism), making the result totally broken.
The newly exposed vmethods are pause, resume, stop and clear_all.
The existing reset vmethod is deprecated.
The audio sink will fall back to calling reset if pause or stop
are not provided, and will fall back to calling start if
resume is not provided. There is no default clear_all
implementation.
Existing audio sinks continue to work as before.
This change is useful for sinks that need to distinguish
between a pause and a stop (currently both are handled
by a reset) and is needed for
https://bugzilla.gnome.org/show_bug.cgi?id=788362 and
https://bugzilla.gnome.org/show_bug.cgi?id=788361
Universal Windows Platform apps are not allowed to use LoadLibrary to
load arbitrary DLLs from the filesystem. They can only use
LoadPackagedLibrary to load DLLs that have been packaged with the app
as assets.
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/190
If the remainder is not evenly divisible by 4, we'd miss the check
for zero and continue the loop until crashing. Change the branch
to take negatives into account as well.
This more closely matches the SSE loop.
We are using ARC to clean up after ourselves.
../gst-libs/gst/gl/cocoa/gstglwindow_cocoa.m:159:20: error: unused variable 'queue' [-Werror,-Wunused-variable]
dispatch_queue_t queue = (__bridge_transfer dispatch_queue_t) window->priv->gl_queue;
^
Matroskademux will send a gap event when the lag between video and audio is
over 3 seconds. audiodecoder needs to handle the gap event and set default
output caps. Previously only the audio info was set while the output caps
were ignored, which caused the assertion to fail.
We need to fill the output caps in gst_audio_decoder_negotiate_default_caps()
with the negotiated caps to avoid a critical warning being printed when they
are checked later.
This is needed for using GstGL with ANGLE as the GLES implementation
in Universal Windows Platform apps that use the Windows Runtime
(WinRT) instead of Win32, which is deprecated and not allowed in
Windows Store apps.
This has been tested with Servo on the Microsoft HoloLens 2, and seems
to work quite well.
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
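A minimal sketch of the pattern with a hypothetical signal; the only point
is the NULL in the marshaller slot:

  signals[SIG_EXAMPLE] = g_signal_new ("example", G_TYPE_FROM_CLASS (klass),
      G_SIGNAL_RUN_LAST, 0, NULL, NULL,
      NULL /* let GLib pick the built-in marshaller and its va variant */,
      G_TYPE_NONE, 1, GST_TYPE_BUFFER);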
Simple addition to support the EXT_platform_device display type.
It's a kind of special display type (part of the EGL specification)
which has no window at all.
To use EGLDevice explicitly, set the environment variable GST_GL_WINDOW=egl-device.
See also https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_platform_device.txt
* Fix typo: s/nunormalized/normalized/g
* Update the GstVideoMasteringDisplayInfo description:
each value is not an array.
* Add missing newline between the arguments description and the
detailed comment.
The gltestsrc element was refactored to inherit from this base class, which
handles the GL context. The sub-class only needs to implement the gl_start,
gl_stop and fill_gl_memory vfuncs, along with properly advertising the GL APIs
it supports through the supported_gl_api GstGLBaseSrc class attribute.
The caps and thus the video info have preference. If the field order is
set in there then it applies to all frames.
This works around issues where the tff field order is only set in the
caps but not additionally in the buffer flags.
Commit c71dd72b "gl/wayland: fix glib mainloop integration" was overeager
in removing the poll result test from the check function. This caused
dispatch to be called even if no new events are available on the
Wayland connection, which in turn would wake up the glib mainloop,
causing effectively a tight loop without ever blocking on the poll.
Fixes #603
basedepayload generates its own segment in a pretty unconventional
manner, relying on information in the caps such as npt-start or
npt-stop, usually set by rtspsrc.
In ONVIF mode, rtspsrc will generate the correct segment, and this
logic in rtpbasedepayload is not needed; this commit allows
rtspsrc to signal that through the caps.
While we can convert between all formats apart from the rate, we
actually need to make sure that we comply with a) the rate of the first
configured pad and b) also all the allowed rates from downstream.
We were previously only fixating the rate in the getcaps
implementation when downstream was requiring a discrete value,
causing negotiation to fail when upstream was capable of rate
conversion, but not made aware that it had to occur.
Instead of fixating the rate, we can simply update our sink
template caps with whatever GValue the downstream caps are holding
as their rate field.
Allows negotiation to successfully complete with pipelines such as:
audiotestsrc ! audio/x-raw, rate=48000 ! audioresample ! audiomixer name=m ! \
audio/x-raw, rate={800, 1000} ! autoaudiosink \
audiotestsrc ! audio/x-raw, rate=44100 ! audioresample ! m.
... also known as ITU-T H.273.
The conversion has been handled per plugin until now. That causes
a lot of code duplication, and some plugins might not be updated with newly
introduced color{matrix,transfer,primaries} enum value(s).
Instead of handling it per plugin, centralized handling removes such
code duplication and keeps plugins up to date.
The extmap attribute allows mapping RTP extension header IDs to
well-known RTP extension header specifications. See RFC8285 for details.
We store the extmap attribute either as a string in the caps
extmap-X=extensionname
where X is the integer extension header ID, or as a 3-tuple of strings
extmap-X=<direction,extensionname,extensionattributes>
where direction or extensionattributes are allowed to be the empty
string.
Both formats are allowed because usually only the extension name is
given and it's much simpler to handle in caps.
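For illustration (the extension URIs are just examples, not part of this
change), both forms can be produced from a serialized caps string:

  caps = gst_caps_from_string ("application/x-rtp, "
      "extmap-1=(string)\"urn:ietf:params:rtp-hdrext:ssrc-audio-level\", "
      "extmap-2=< \"recvonly\", \"urn:3gpp:video-orientation\", \"\" >");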
We use this property in gst_gl_display_egl_from_gl_display, to set
foreign_display for the new GstGLDisplayEGL instance. This fixes a
problem where gst_gl_display_egl_finalize calls eglTerminate on a
pre-existing EGL connection.
It seems that eglCreatePlatformWindowSurfaceEXT is failing (with
EGL_BAD_ALLOC) because it thinks an EGL surface has already been created
for the wl_egl_window. The reason is that the "driver_private" field of
the wl_egl_window is getting clobbered by the function
wl_proxy_set_queue().
Since a wl_egl_window is not a wl_proxy, it shouldn't be passed to
wl_proxy_set_queue(). It just wraps a wl_surface (which is a wl_proxy).
And it looks like the queue for that surface is getting set earlier on
in the function anyway.
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/621#note_184582
Body_offset means that that much data has already been written.
Without this patch, n_vectors sometimes becomes one more than it should,
and then there is a vector with a random size, causing
writev_bytes to fail with a "Bad address" error.
This patch fixes the following critical warning:
CRITICAL **: 11:33:32.843: Unknown GL format 0x0 provided
It would happen during the setup of a second pipeline involving the DMABuf
uploader, typically with a v4l2src element. The warning was raised because the
uploader had a cached EGLImage already filled but the formats were not
synchronized accordingly.
The "field-order" is related for all interlace_mode modes except the
"progressive" mode. So instead of or'ing each mode we can use the
already supported GST_VIDEO_INFO_IS_INTERLACED macro.
This makes the pipelines below work:
little endian:
gst-launch-1.0 videotestsrc ! video/x-raw,format=P010_10LE ! glimagesink
big endian:
gst-launch-1.0 videotestsrc ! video/x-raw,format=P010_10BE ! glimagesink
gst_meta_api_type_register() assumes that the last tags element is null, but it wasn't
==17422==ERROR: AddressSanitizer: global-buffer-overflow on address 0x7f4e2a67c998 at pc 0x7f4e2a0c92ac bp 0x7ffcc41f80b0 sp 0x7ffcc41f80a0
READ of size 8 at 0x7f4e2a67c998 thread T0
#0 0x7f4e2a0c92ab in gst_meta_api_type_register ../subprojects/gstreamer/gst/gstmeta.c:94
#1 0x7f4e2a5582c3 in gst_video_afd_meta_api_get_type ../subprojects/gst-plugins-base/gst-libs/gst/video/video-anc.c:1146
#2 0x404c7c in invoke_get_type (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x404c7c)
#3 0x406b5c in dump_irepository (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x406b5c)
#4 0x407089 in main (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x407089)
#5 0x7f4e295b4b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)
#6 0x404479 in _start (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x404479)
0x7f4e2a67c998 is located 40 bytes to the left of global variable 'tags' defined in '../subprojects/gst-plugins-base/gst-libs/gst/video/video-anc.c:1232:25' (0x7f4e2a67c9c0) of size 24
0x7f4e2a67c998 is located 0 bytes to the right of global variable 'tags' defined in '../subprojects/gst-plugins-base/gst-libs/gst/video/video-anc.c:1141:25' (0x7f4e2a67c980) of size 24
SUMMARY: AddressSanitizer: global-buffer-overflow ../subprojects/gstreamer/gst/gstmeta.c:94 in gst_meta_api_type_register
Add a max-reorder property to make the old hard-coded reordering limit of
100 configurable. It's particularly useful in some scenarios to set
max-reorder=0 to disable the behavior where the depayloader drops
packets.
Note that although the default value is 100, the default limit has
increased by one because of the changed if-test. This was done to
make the max-reorder value more intuitive. See tests.
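For example, an application can turn the dropping off entirely (the
depayloader variable is illustrative):

  g_object_set (depayloader, "max-reorder", 0, NULL);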
Instead of checking if the requested GL API is GLES2 (because ANY can
be set), the string is matched against the GLES2 prefix, and if it
matches, the string is offset.
RFC 7826 recommends (but does not require) starting at 0,
but at least one known server implementation fails to copy
request sequence numbers <1 into responses due to an
incorrect null check.
The server known to exhibit this behavior is the Parrot
Streaming Server, serving video from their UAV devices.
A fix has been submitted upstream as well:
https://github.com/Parrot-Developers/librtsp/pull/2
The Parrot developers are known to have tested with LibVLC.
In WireShark debugging, LibVLC appears to start with a CSeq
of 2, which is likely why this bug went unnoticed.
This reverts 487595a7d6, which set this to 0 citing the
RFC. The switch to 0 was thus a recent one; it's therefore
possible server implementors relied on the previous
GStreamer client behavior in their tests as well.
Fixes #624.