Exposing the navigation thread's main context, the GSourceFuncs, and the
key_event and mouse_event structs reveals a bit too much of the internals.
Let's just go with two functions to asynchronously send navigation events
on the window, with the same API as the synchronous ones.
This upload method detects and optimizes uploads of DMABuf memory. This is
done by creating and caching EGLImage wrappers around the DMABufs. The
EGLImages are then bound to a texture, which gets converted using a
standard shader.
Example pipeline:
GST_GL_PLATFORM=egl \
gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! \
video/x-raw,format=NV12 ! glimagesink
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Maps GstVideoFormats to suitable DRM fourccs that work with glcolorconvert,
using gst_gl_memory_alloc(). We mostly only require 4 formats to be
supported by the driver: the DRM equivalents of RGB16, RGBA, R8 and RG88.
This way it's compatible with desktop GL, since GL_TEXTURE_2D is used, and
it limits the driver requirements. With this we can virtually support all
the formats glcolorconvert supports.
https://bugzilla.gnome.org/show_bug.cgi?id=743345
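As an illustration of the mapping above, a sketch only: the helper name and
exact table are our assumption based on the four formats listed, fourccs are
from drm_fourcc.h, and the GstGLFormat names follow current libgstgl:
#include <drm_fourcc.h>
static guint32
gl_plane_format_to_drm_fourcc (guint gl_format)
{
  switch (gl_format) {
    case GST_GL_RGB565: return DRM_FORMAT_RGB565;   /* RGB16 equivalent */
    case GST_GL_RGBA:   return DRM_FORMAT_ABGR8888; /* RGBA equivalent */
    case GST_GL_RED:    return DRM_FORMAT_R8;       /* R8 equivalent */
    case GST_GL_RG:     return DRM_FORMAT_GR88;     /* RG88 equivalent */
    default:            return 0;
  }
}
e.g. NV12 would then be imported as one R8 EGLImage (Y plane) plus one GR88
EGLImage (interleaved UV plane), and glcolorconvert's existing NV12 shader
does the rest.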
Add gst_gl_memory_allocator_get_default to get the default allocator based
on the OpenGL version. Allows us to stop hardcoding the PBO allocator, which
isn't supported on GLES2.
Fixes GL upload on iOS 9 among other things.
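A minimal sketch of the new call, assuming a valid GstGLContext *context:
GstGLMemoryAllocator *allocator =
    gst_gl_memory_allocator_get_default (context);
/* ... allocate GL memory through it instead of assuming PBO support ... */
gst_object_unref (allocator);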
E.g. when wrapping a data pointer, we don't want to map/unmap off the end
of the pointer with the alignment bytes.
Instead, track that information separately, as maxsize is used for mapping
by GstMemory and thus represents a size without any alignment padding bytes.
Requires the usage of GstGLVideoAllocationParams, however any user can set
their own parameters, along with an allocator, which will be used to
allocate the correct memory type.
- Create GstGLVideoAllocationParams, which is a GstGLAllocationParams
  subclass.
- Make it possible to allocate glmemory objects directly if no frills are
  needed.
This is made possible by a subclassable GstGLAllocationParams that holds
the allocation parameters.
Every allocation now goes through gst_gl_base_memory_alloc, with the
allocation parameters specified in a single struct to allow extension by
different allocators, as sketched below.
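A sketch of that path, assuming a valid context and an initialised
GstVideoInfo v_info (signatures follow current libgstgl headers and may
differ slightly from the release this change landed in):
GstGLMemoryAllocator *allocator =
    gst_gl_memory_allocator_get_default (context);
GstGLVideoAllocationParams *params =
    gst_gl_video_allocation_params_new (context, NULL, &v_info, 0, NULL,
    GST_GL_TEXTURE_TARGET_2D, GST_GL_RGBA);
GstGLBaseMemory *mem =
    gst_gl_base_memory_alloc (GST_GL_BASE_MEMORY_ALLOCATOR (allocator),
    (GstGLAllocationParams *) params);
gst_gl_allocation_params_free ((GstGLAllocationParams *) params);
gst_object_unref (allocator);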
The imported memory has already been allocated; passing allocation
parameters with alignment confuses the memory, which ends up with a size
different from maxsize and leads to an overrun when the memory is copied.
GCC automatically disables redundant declaration warnings for system
headers. As soon as we start using a non-system-installed Mesa, we would
start having issues. The test for both wasn't setting any flags, so it
would work but then fail at runtime.
This is fixed by disabling that GCC warning in the code (only where
needed). The test is also fixed to avoid the false positive we had.
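A minimal sketch of that in-code workaround (the header shown is
hypothetical; the pragmas are standard GCC):
/* Only silence -Wredundant-decls around the offending declarations. */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wredundant-decls"
#include <GL/glext.h>
#pragma GCC diagnostic pop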
The base class is useful for having multiple backing memory types other
than the default, e.g. IOSurface, EGLImage, dmabuf.
The PBO transfer logic is now inside GstGLMemoryPBO, which uses GstGLBuffer
to manage the PBO memory.
This also moves the format utility functions into their own file.
Heavily based on GstGLBaseBuffer, which is a subclass of GstGLBaseMemory.
Provides GPU- and CPU-accessible GL buffer objects by GL handle or by
sysmem data pointer.
It handles the following:
- the GstAllocationParams -> gst_memory_init transformation
- making sure that map/unmap/create/destroy happen on the GL thread with
  a GL context current
- holding a possible sysmem-accessible data pointer with alignment
- holding the needed upload/download transfer state
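From the user's side this looks roughly like the sketch below, assuming
mem is a GstMemory backed by a GstGLBuffer:
GstMapInfo info;
/* GL access: info.data points at the GL buffer object name. */
if (gst_memory_map (mem, &info, GST_MAP_READ | GST_MAP_GL)) {
  guint buffer_id = *(guint *) info.data;
  /* ... use buffer_id on the GL thread ... */
  gst_memory_unmap (mem, &info);
}
/* CPU access: info.data is the sysmem pointer; the base memory schedules
 * any needed download on the GL thread first. */
if (gst_memory_map (mem, &info, GST_MAP_READ)) {
  gst_memory_unmap (mem, &info);
}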
Rectangle textures don't use normalized coordinates, so subsampling needs
to be factored in explicitly.
Fixes YUV => RGB conversion for rectangle textures.
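A sketch of the difference (GLSL in a C string, identifiers ours): with
rectangle textures the coordinates are in pixels rather than [0,1], so
NV12's 2x2 chroma subsampling has to be applied to the coordinate by hand.
static const gchar *nv12_rect_sample =
    "uniform sampler2DRect Ytex, UVtex;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "  float y = texture2DRect (Ytex, v_texcoord).r;\n"
    "  vec2 uv = texture2DRect (UVtex, v_texcoord / 2.0).rg;\n"
    "  /* ... YUV -> RGB conversion ... */\n"
    "}\n";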
To use the GLMemory and EGLImage allocators, one needs to know the
libgstgl API. This is only expected if the associated caps features have
been negotiated. Generic elements that otherwise receive those allocators
may fail, resulting in a broken pipeline. We don't want to force every
generic element to check whether the allocator is a custom allocator or a
normal allocator (one which implements the _alloc method).
https://bugzilla.gnome.org/show_bug.cgi?id=758877
gstglsyncmeta.c -fPIC -DPIC -o .libs/libgstgl_1.0_la-gstglsyncmeta.o
gstglsyncmeta.c: In function 'gst_buffer_add_gl_sync_meta':
gstglsyncmeta.c:131:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
There could be other ways/requirements for synchronising two GPU command
streams (whether GL or platform specific), e.g.
glFenceSync/eglWaitNative/cond/etc.
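A minimal sketch of the explicit-context usage (the context variable names
are ours):
GstGLSyncMeta *sync =
    gst_buffer_add_gl_sync_meta (producer_context, buffer);
gst_gl_sync_meta_set_sync_point (sync, producer_context);
/* ... later, before using the buffer's GL resources elsewhere ... */
gst_gl_sync_meta_wait (sync, consumer_context);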
This no longer does anything, and it was marked as CONSTRUCT_ONLY, which
means someone would really have to go out of their way to be able to set
it. That would only be done in very custom scenarios, if ever, and those
will likely target a specific version of GStreamer, so there's probably
not much point in keeping it deprecated for a while before removing it.
Given a NULL-terminated string, s,
s[i] = '\0';
i++;
does not guarantee that s[i] is NULL-terminated, and thus string
operations could read off the end of the array.
https://bugzilla.gnome.org/show_bug.cgi?id=758039
gst_gl_shader_detach_unlocked already removes the list entry, so attempting
to use that element to iterate to the next stage could read invalid data.
Based on patch by Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=758039
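A sketch of the fix pattern (the stages list field name is hypothetical):
save the next link before detaching, since detaching removes the current
one.
GList *elem = shader->priv->stages;
while (elem) {
  GList *next = elem->next;
  gst_gl_shader_detach_unlocked (shader, GST_GLSL_STAGE (elem->data));
  elem = next;
}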
Some drivers don't provide the compatibility definition and we need to provide
our own 'out vec4' variable to put the results of the fragment shader into.
https://bugzilla.gnome.org/show_bug.cgi?id=757938
Rectangular textures are unavailable in unextended
GLES2 #version 100 shaders.
Fixes
texture-target=rectangle ! glcolorconvert ! texture-target=2D
There are a couple of differences between GL3 and GLES2/GL:
- varying -> in or out, depending on the stage (vertex/fragment)
- attribute -> in
- filtered texture access is a single function, texture()
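A sketch of the same trivial fragment shader in both dialects (GLSL as C
strings, identifiers ours); the GL3 variant also shows the explicit
'out vec4' mentioned above:
static const gchar *frag_gles2 =
    "#version 100\n"
    "precision mediump float;\n"
    "varying vec2 v_texcoord;\n"
    "uniform sampler2D tex;\n"
    "void main() { gl_FragColor = texture2D (tex, v_texcoord); }\n";
static const gchar *frag_gl3 =
    "#version 150\n"
    "in vec2 v_texcoord;\n"             /* varying -> in */
    "uniform sampler2D tex;\n"
    "out vec4 fragColor;\n"             /* no gl_FragColor in GL3 */
    "void main() { fragColor = texture (tex, v_texcoord); }\n";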
Bitrate estimation is now handled through a queue2 element added after
the source element used to download fragments.
Original hlsdemux patch by Duncan Palmer <dpalmer@digisoft.tv>
https://bugzilla.gnome.org/show_bug.cgi?id=733959
This code will never be called, as max >= min in all cases. If the upstream
latency query had returned min > max, the function would already have
returned, and all values that are added afterwards keep max >= min.
Make gst_gl_context_gen_shader/_compile_shader assume GST_GLSL_PROFILE_ES |
GST_GLSL_PROFILE_COMPATIBILITY as the profile. Without this, the shader
compiler doesn't inject the #version tag, resulting in a compilation error
on Mountain Lion.
This is a workaround for old code using gst_gl_context_gen_shader. New code
should use the gst_glsl_stage_* API directly, which allows the caller to
explicitly specify the version/profile.
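A minimal sketch of the explicit path (the shader source variable is
hypothetical):
GstGLSLStage *stage = gst_glsl_stage_new_with_string (context,
    GL_FRAGMENT_SHADER, GST_GLSL_VERSION_100, GST_GLSL_PROFILE_ES,
    fragment_source);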
The ret variable may be uninitialized, so its contents were undefined and
the results erratic (failing with glvideomixer, succeeding in other cases).
P.S. No idea why gcc/clang et al. never picked up on this like they
normally do (probably due to some optimisation pass figuring out it's only
set once...)