Otherwise, when the application reuses the same UIView, we were getting
draw notifications on the previous view/layer, which was no longer valid
and was referencing pointers that had already been freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753003
- xcb is supposedly thread-safe!
videotestsrc ! glimagesink now doesn't spuriously result in a
'call XInitThreads()' error. However, if anybody else is using X11,
XInitThreads() still needs to be called, and multiple glimagesink
instances still need XInitThreads().
Everything still takes libX11 handles as they are compatible with the xcb
variants. Unfortunately we cannot move fully over to xcb because GLX is
entirely based on Xlib. It's also impossible to transform an xcb_connection
into a Display, which means we still require X11 handles.
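For reference, a minimal sketch (not from the patch) of what an application
that uses X11 itself alongside glimagesink needs to do:

  #include <X11/Xlib.h>
  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    /* must be the very first Xlib call if anything else in the process
     * touches X11 from multiple threads */
    XInitThreads ();
    gst_init (&argc, &argv);
    /* ... create pipelines containing glimagesink as usual ... */
    return 0;
  }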
The spec allows the core/compatibility profiles to be used
with #version 150.
Also tighten up the tests to check for default profiles being chosen
correctly.
The change to use GST_EXPORT for symbols under Windows requires
GST_EXPORTS for internal use, and that is also needed under Autotools.
The same thing is done for gstreamer-1.0.dll in -core.
The calling convention may be deprecated, but we still need it for
OpenGL. The build issue was caused by incorrect syntax for the WINAPI
(__stdcall) prototype in function pointers, which GCC accepts but MSVC
rejects.
With MSVC, this gives the following warning:
warning C4305: 'function': truncation from 'double' to 'gfloat'
Apparently, MSVC does not figure out what type to use for constants
based on the assignment. This warning is very spammy, so let's try to
fix it.
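An illustrative (not taken from the patch) example of the kind of change involved:

  gfloat alpha = 0.5;   /* warning C4305: truncation from 'double' to 'gfloat' */
  gfloat beta = 0.5f;   /* no warning: the constant is already single precision */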
At minimum, we only need to glFlush() if we are in a shared GL context
environment. Move the glFinish() to when the actual wait is requested
which may be never. Improves the throughput on older GL systems without
GL3/GLES3 and/or fence sync objects.
Using g_thread_join() in _finalize() handlers may result in a deadlock
joining the current thread when the last reference is held by a signal
handler.
e.g.:
error 'Resource deadlock avoided' during 'pthread_join (pt->system_thread, NULL)'
The backtrace looks like this:
[...]
g_thread_join ()
gst_gl_window_finalize ()
gst_gl_window_x11_finalize ()
g_object_unref ()
g_value_unset ()
g_signal_emit_valist ()
g_signal_emit ()
gst_gl_window_send_mouse_event ()
gst_gl_window_mouse_event_cb ()
g_main_dispatch ()
[..]
g_main_loop_run ()
gst_gl_window_navigation_thread ()
g_thread_proxy ()
start_thread ()
clone ()
We cannot set the x, y coordinates of the video frame on dispmanx at
this point. We need to teach the dispmanx backend about the
set_render_rectangle API so a video can be drawn together with other UI.
This patch keeps the current behaviour, which places the video frame at the
center of the display if no set_render_rectangle call is made on the
dispmanx window.
https://bugzilla.gnome.org/show_bug.cgi?id=766018
e.g. passing with_gl_api=gles2 would still build the glx code but not link
against the libGL library, which is where the glX* functions are located,
resulting in a linker error.
Solved by checking for the libGL library if either opengl or glx may be
needed and then disabling the corresponding deps as requested.
The tests were broken since 91fea30, which changed glupload to return
GST_GL_UPLOAD_RECONFIGURE if the texture target in the input buffers doesn't
match the texture-target configured in the output caps.
This commit fixes that and adds more checks for the new behaviour.
Now when used with video/x-raw as input, the GLMemoryUpload method checks for
->tex_target in input GLMemory(es) and sets the output texture-target
accordingly.
Fixes video corruption with a pipeline like avfvideosrc ! video/x-raw !
glimagesink, where on macOS avfvideosrc pushes RECTANGLE textures but glupload
was configuring texture-target=2D as output.
Don't set the chosen texture-target into the wrong structure.
The input caps may not be writable, and in any case the intention was to
configure the othercaps. Also remove an extra unref; the othercaps ref is
already consumed by gst_caps_make_writable.
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Multiple threads may be accessing the wayland fd at the same time, which
requires the use of the special wayland API for this, to ensure nobody
steals reads and causes a stall for anyone else.
When connected to qmlglsrc, the x11 event loop is replaced by the qt event
loop, which means the window cannot receive events from the X server, such
as resize.
https://bugzilla.gnome.org/show_bug.cgi?id=768160
Makes infinitely more sense, and implementations were expecting that behaviour
anyway; otherwise they would enter a resize, draw, resize, draw, ... cycle
instead of only resizing once.
There's no need for the jump to an extra thread in most cases, especially
when relying solely on a shader to render. We can use the provided
render_to_target() functions to simplify filter writing.
Facilities are given to create FBOs and attach GL memory (renderbuffers
or textures). It also keeps track of the renderable size for effective use
with glViewport().
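A rough sketch of the intended usage pattern, under the assumption that the
exact signatures match the GstGLFramebuffer headers of your version:

  #include <gst/gl/gl.h>

  static gboolean
  draw_cb (gpointer user_data)
  {
    /* issue the GL draw calls here; the FBO is already bound and
     * glViewport() has been set to the renderable size */
    return TRUE;
  }

  static gboolean
  render_into (GstGLFramebuffer * fbo, GstGLMemory * out_tex)
  {
    /* render into out_tex through the framebuffer */
    return gst_gl_framebuffer_draw_to_texture (fbo, out_tex, draw_cb, NULL);
  }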
Calling glUniformMatrix before the shader is bound is invalid and
would result in errors like:
GL_INVALID_OPERATION in glUniformMatrix(program not linked)
Move glUniformMatrix() to after the gst_gl_shader_use() call.
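Sketch of the corrected ordering (the uniform name here is only illustrative):

  static void
  upload_transform (GstGLShader * shader, const gfloat * matrix)
  {
    gst_gl_shader_use (shader);
    gst_gl_shader_set_uniform_matrix_4fv (shader, "u_transformation", 1,
        FALSE, matrix);
  }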
Rather than assuming something: e.g. zerocopy on iOS with GLES3 requires
the use of Luminance/Luminance Alpha formats and does not work with
Red/RG textures.
Take the used texture type from the memory instead.
Fixes conversion from multi-planar YUV formats with two components per plane
(NV12, NV21, YUY2, UYVY, GRAY16_*, etc) with Luminance Alpha input textures.
This is also needed for zerocopy decoding on iOS with GLES 3.x.
The intention was to assert if both maj and min were NULL (as there would be no
point calling the function). Instead, the assert occurred if either maj or min
was NULL.
Fix that.
Newer devices require using a different GLSL extension for accessing
external-oes textures in a shader using the texture() functions.
While GL_OES_EGL_image_external_essl3 should supposedly be supported
on any GLES3 Android device, the extension was defined after a lot of the
older drivers were built, so they will not know about it. Thus there are two
possible interpretations of which of texture()/texture2D() should be supported
for external-oes textures: strict adherence to the GL_OES_EGL_image_external
extension spec, which uses texture2D(), or following GLES3's pattern, also
allowing texture() as a function for accessing external-oes textures.
This adds another mangling pass to convert
#extension GL_OES_EGL_image_external : ...
into
#extension GL_OES_EGL_image_external_essl3 : ...
on GLES3 and when the GL_OES_EGL_image_external_essl3 extension is supported.
Only uses texture() when GLES3 and the GL_OES_EGL_image_external_essl3
extension are both supported for external-oes textures.
Uses GLES2 + texture2D() + GL_OES_EGL_image_external in all other external-oes
cases.
https://bugzilla.gnome.org/show_bug.cgi?id=766993
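For illustration only (an assumption of roughly what the mangled source ends
up as, not the exact output), a fragment shader on a GLES3 driver that knows
the essl3 extension looks along these lines:

  static const gchar *external_oes_frag_gles3 =
      "#version 300 es\n"
      "#extension GL_OES_EGL_image_external_essl3 : require\n"
      "precision mediump float;\n"
      "uniform samplerExternalOES tex;\n"
      "in vec2 v_texcoord;\n"
      "out vec4 fragColor;\n"
      "void main () {\n"
      "  fragColor = texture (tex, v_texcoord);\n"
      "}\n";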
Otherwise we will leak GstGLContexts when adding the same context more than
once.
Fixes a regression caused by 5f9d10f603 in the
gstglcontext unit test that failed with:
Assertion 'tmp == NULL' failed
Provide a function to get the affine matrix in the meta in terms of NDC
coordinates and use it as a standard opengl matrix.
Also advertise support for the affine transformation meta in the allocation
query.
Because the current GstEGLImageMemory does not inherit from GstGLMemory, GLUpload
allocates an additional GLMemory and uploads the decoded contents from decoders
that use EGLImage (e.g. gst-omx on the RPi).
This work adds GstGLMemoryEGL to avoid this overhead. Decoders allocate
GstGLMemoryEGL and decode their contents into the EGLImage of GstGLMemoryEGL, and
GLUpload uses this memory without allocating additional textures or performing
blit operations.
[Matthew Waters]: gst-indent the sources and fix a critical retrieving the egl
display from the memory.
https://bugzilla.gnome.org/show_bug.cgi?id=760916
Allows creating wrapped memories with GstGLAllocationParams.
The wrapped pointers will be set in the parameters before being passed
to the memory allocation function.
Some platforms provide an old version of GLES2/gl2.h and GLES2/gl2ext.h that
will fail when including GLES3/gl3.h due to missing typedefs.
Seen on the RPi.
Don't create many short lived locks/conds in gst_gl_window_send_message. This is
a micro optimization to save a bunch of pthread_* calls which are expensive on
OSX/iOS and possibly other platforms.
While calling eglCreateImage without a GL context current in the executing
thread works on the RPi, some other implementations will return errors.
Marshal the eglCreateImage call to the GL thread to appease those implementations.
There are numerous slight differences required between Desktop GL and GLES3 for
multiple render targets.
1. gl_FragData doesn't exist at all and one is required to use
'layout (location = ?) out ...' instead.
2. gl_FragColor doesn't exist, same as 1
3. texture2D() has been deprecated
Fortunately most of these have been taken care of with GL3 and the shader
mangling already exists so just expand the conditions they are used in. The
gl_FragData issue requires a new mangle pass though. We also use this new
pass on desktop GL for consistency.
By default, reading GL_RED or GL_RG is unsupported by glReadPixels unless
exposed through GL_IMPLEMENTATION_COLOR_READ_FORMAT/TYPE. This allows
downloading multi-planar video frames where possible.
There are some cases where it's needed for binding in/out variables in shaders.
e.g. glsl 150 (gl 3.2) doesn't support the 'layout (location = ?)' specifiers in
the shader source so we have to bind them ourselves.
Without the GST_GL_API_GLES2 bit set, we will not even attempt to look
for the function pointers in the core library and will fall back to
glFlush/glFinish.
If the user uploads their own texture without setting the unpack length, then
the result will have the appearance of stride mismanagement due to
an incorrect row length.
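For users uploading their own data, the relevant GL state is sketched below
(GL_UNPACK_ROW_LENGTH is counted in pixels, and on GLES2 requires
GL_EXT_unpack_subimage); 'data', 'width', 'height' and 'stride' are the
caller's:

  glPixelStorei (GL_UNPACK_ROW_LENGTH, stride / 4);  /* 4 bytes per RGBA pixel */
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
      GL_UNSIGNED_BYTE, data);
  glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);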
If we are given caps with extra features (like the overlay composition
features), we can only deal with that when we are in passthrough mode.
Previously we were bailing entirely and not allowing passthrough filter elements
with things like textoverlay.
Fixes the following pipeline (assuming glfilter supports passthrough):
gl ! textoverlay ! glfilter ! ... ! glimagesinkelement
https://bugzilla.gnome.org/show_bug.cgi?id=763756
When transforming, explode it out into the necessary caps features, both
with and without the passthrough features.
Fixes negotiation in the following class of pipelines:
gl ! textoverlay ! glupload ! glimagesinkelement
https://bugzilla.gnome.org/show_bug.cgi?id=763756
GL 1.4 (with GL_ARB_shader_objects) doesn't have glIsProgram or glIsShader
equivalents. As they are simply assertions, skip them when there isn't a
valid function pointer.
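The guard amounts to something like the following sketch; the vtable member
names follow the GstGLFuncs convention and 'program_handle' is a placeholder:

  const GstGLFuncs *gl = context->gl_vtable;

  /* GL 1.4 + GL_ARB_shader_objects has no glIsProgram/glIsShader, so only
   * assert when the entry point is actually present */
  if (gl->IsProgram)
    g_assert (gl->IsProgram (program_handle));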
With e38af23044 returning the correct contexts,
gst_gl_display_add_context() was susceptible to causing infinite loops when
adding the same GstGLContext more than once. Fix and add a test for
gst_gl_display_add_context().
Fixes glvideomixer gst-validate tests.
Execute GL calls without marshalling them to the context thread. In the cocoa
and eagl backends calling gst_gl_context_activate is cheap and therefore calling
it on the current thread and serializing GL calls with a per-context lock is
more efficient (faster and has less overhead) than marshalling everything to the
context thread.
This optimization cuts a large overhead in g_poll (continuously waking up the
context thread) and in g_mutex_*/g_cond_* (waiting for results from the context
thread).
When requesting a glcontext (regardless of thread), the result was correct.
However, when requesting the current glcontext on a specific thread, it could
come up with a glcontext active on another thread.
https://bugzilla.gnome.org/show_bug.cgi?id=763168
Maxsize is initialized once and should never change. Allocating data
should have no impact on the selected max size for this memory. This was
causing memory map failures, as the maxsize would become smaller than
size. This happened when using direct rendering in avviddec on GL that
does not support PBO transfer.
https://bugzilla.gnome.org/show_bug.cgi?id=763045
Intended for use with wrapped contexts that are created shared with gst's
gl contexts in order to manage the internal sharegroup state correctly.
e.g. with caopengllayer (which is used in glimagesink and caopengllayersink
on OS X), we create a CGL context from the gst context and the sharing state
was not being correctly set on either GL context and gst_gl_context_is_shared()
was always returning FALSE.
With 11fb4fff80 only flushing with multiple
shared contexts, the required flush was not occurring, causing screen
corruption or stuttering.
Note: this didn't affect GST_GL_API=opengl pipelines
https://bugzilla.gnome.org/show_bug.cgi?id=762620
Usually gl debug is initialized in gst_gl_context_create_thread.
But this function is not used when using the GstGLContextGPUProcess
from ChromiumGStreamerBackend.
Received signal 11 SEGV_MAPERR 000000000000
gst_debug_category_get_threshold
gst_gl_insert_debug_marker
gst_gl_base_filter_gl_start
CC libgstgl_x11_la-gstglcontext_glx.lo
In file included from gstglcontext_glx.c:39:0:
../utils/opengl_versions.h:52:43: error: ‘gles2_versions’ defined but not used [-Werror=unused-const-variable]
static const struct { int major, minor; } gles2_versions[] = {
^~~~~~~~~~~~~~
Invoke the callback right away when called on the context thread. Removes
overhead when nesting libgstgl calls (for example when working with the sync
meta).
CPU waits are more expensive and are only required if the CPU is ever going to
access the data. GPU waits perform inter-context synchronisation and are cheaper
as they don't require CPU intervention.
All uses of query->context in gstglquery.c assume it exists. We can assume
this as well before unrefing it. Furthermore, gst_object_unref() will just
silently return if it ever were to not exist.
Improves the responsiveness of the pipeline when resources are close/above the
limitations of the hardware.
Any subclass that wishes not to enable qos can do so themselves.
https://bugzilla.gnome.org/show_bug.cgi?id=761519
This reverts commit 96b9666d59.
This reverts commit d11385d167.
This breaks the texture sharing with the applemedia elements as
CVOpenGLESTextureCache seems to have an arbitrary restriction on GLES2 only.
We may need them to transform into a different set of formats.
Fixes YUV->YUV with two glcolorconverts, e.g:
format=I420 ! glcolorconvert ! glcolorconvert ! format=NV12
1.0 / width does not offset by one pixel in rectangular textures (which use
unnormalized coordinates).
Provide the actual pixel offset as a uniform to the shader.
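A sketch of the idea on the CPU side; the 'poffset' uniform name and the
surrounding variables are illustrative:

  if (target == GST_GL_TEXTURE_TARGET_RECTANGLE)
    /* unnormalized coordinates: one pixel is simply 1.0 */
    gst_gl_shader_set_uniform_2f (shader, "poffset", 1.0f, 1.0f);
  else
    /* normalized coordinates: one pixel is 1.0 / size */
    gst_gl_shader_set_uniform_2f (shader, "poffset", 1.0f / width,
        1.0f / height);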
1. Correctly describe what caps we can transform to/from.
i.e. no YUV->YUV or GRAY->YUV or YUV->GRAY (except for passthrough).
2. Prefer similar formats and ignore incompatible formats on fixation.
Useful when gst_gl_window.c::gst_gl_window_new is not used.
This is the case when using a custom GstGLWindow.
(ex: GstGLWindowGPUProcess from Chromium)
The messages are stored by gst_gl_async_debug_store_log_msg() and output later
by a corresponding store(), output() or an unset()/free().
Some wrapper macros are provided to avoid callers explicitly using __FILE__,
GST_FUNCTION and __LINE__.
This makes a pipeline like:
... ! video/x-raw(memory:GLMemory),format=UYVY ! glcolorconvert !
video/x-raw(memory:GLMemory),format={UYVY, NV12} ! ...
passthrough instead of converting UYVY => NV12. The conversion would happen
before this change since the element (and basetransform) transform the src caps
to format={NV12, UYVY} (since NV12 comes first in the glcolorconvert:src
template) and then the default caps fixate func would fixate to NV12. Blah.
Also there's no need to intersect against the template caps in ::transform_caps
since basetransform does that right after calling the vfunc.
The optimistic download_transfer was not setting the required flag to skip
glReadPixels on a subsequent map (READ), resulting in glReadPixels
happening twice.
Some operations are unnecessary when running with only a single GL
context.
e.g. glFlush when setting a fence object as the flush happens on wait.
API: gst_gl_context_is_shared
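A sketch of the fence case mentioned above, assuming a GL3/GLES3 context and
GstGLFuncs-style vtable member names:

  const GstGLFuncs *gl = context->gl_vtable;
  GLsync fence;

  fence = gl->FenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  if (gst_gl_context_is_shared (context))
    gl->Flush ();   /* only needed so other contexts can see the fence */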
1. Various elements/base classes only perform a subset check on accept-caps
2. Some GL elements have texture-target in their pad template
3. When checking subsets, only the caps to check are allowed to contain extra
fields. If the 'template' caps have extra fields, the subset fails.
Thus without texture-target on the caps, various accept-caps implementations
were failing.
Also, add some convenience functions for setting and retrieving
texture targets to/from GValue.
https://bugzilla.gnome.org/show_bug.cgi?id=759860
GL 2.1 only supports pbo upload.
The wrapped data pointer was only being set on the pbo memory and not on the
glmemory, so when a download was requested (in GL 2.1), glmemory was
allocating a new data pointer and thus not returning the wrapped data.
Exposing the navigation thread's main context, GSourceFuncs and structs called
key_event and mouse_event is exposing a bit too much of the internals. Let's
just go with two functions to asynchronously send navigation events on the
window with the same API as the synchronous ones.
This upload method detects and optimizes uploads of DMABuf memory. This is
done by creating and caching EGLImage wrappers around the DMABufs. The
EGLImages are then bound to textures which get converted using the
standard shaders.
Example pipeline:
GST_GL_PLATFORM=egl \
gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! \
video/x-raw,format=NV12 ! glimagesink
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Maps GstVideoFormats to suitable DRM fourccs which work with
glcolorconvert, using gst_gl_memory_alloc(). We mostly require
only 4 formats to be supported by the driver: the DRM equivalents
of RGB16, RGBA, R8 and RG88. This way it's compatible with
desktop GL, since GL_TEXTURE_2D is used, and it limits the driver
requirements. With this we can support virtually all formats that
glcolorconvert supports.
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Add gst_gl_memory_allocator_get_default to get the default allocator based on
the opengl version. Allows us to stop hardcoding the PBO allocator which isn't
supported on gles2.
Fixes GL upload on iOS9 among other things.
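Sketch of the replacement for hardcoding the PBO allocator, assuming an
active GstGLContext:

  static GstAllocator *
  pick_gl_allocator (GstGLContext * context)
  {
    /* returns the PBO-capable allocator only where PBO transfers exist */
    return GST_ALLOCATOR (gst_gl_memory_allocator_get_default (context));
  }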
e.g. when wrapping a data pointer, we don't want to map/unmap off the end of
the pointer with the alignment bytes.
Instead track that information separately as maxsize is used for mapping by
GstMemory and thus represents a size without any alignment padding bytes.
Requires the usage of GstGLVideoAllocationParams; however, any user can set
their own parameters along with an allocator, which will be used to allocate
the correct memory type.
- Create GstGLVideoAllocationParams which is a GstGLAllocationParams subclass.
- Make it possible to allocate glmemory objects directly if no frills are
needed.
This is made possible by a subclassable GstGLAllocationParams that holds
the allocation parameters
Every allocation now goes through gst_gl_base_memory_alloc, with the
allocation parameters specified in a single struct to allow
extension by different allocators.
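Roughly, allocation then funnels through a single entry point; the construction
of the GstGLVideoAllocationParams itself is elided here since its constructor
arguments differ between GStreamer versions:

  static GstGLBaseMemory *
  alloc_video_memory (GstGLBaseMemoryAllocator * allocator,
      GstGLVideoAllocationParams * params)
  {
    /* GstGLVideoAllocationParams is a GstGLAllocationParams subclass */
    return gst_gl_base_memory_alloc (allocator,
        (GstGLAllocationParams *) params);
  }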
The imported memory has already been allocated; passing allocation
parameters with alignment confuses the memory, which ends up with a
size different from maxsize and leads to an overrun when the memory
is copied.
GCC automatically disables redundancy warnings for system headers. As
soon as we start using a non-system-installed mesa, we would start
having issues. The test for both wasn't setting any flags, so it would
work but then fail at runtime.
This is fixed by disabling that GCC warning in the code (only where needed).
The test is also fixed to avoid the false positive we had.