Otherwise, when the application reuses the same UIView, we were getting
draw notifications on the previous view/layer, which were no longer valid
and were referencing pointers that had already been freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753003
- xcb is supposedly thread-safe!
videotestsrc ! glimagesink no longer spuriously results in a
'call XInitThreads()' error. However, if anybody else is using X11,
then XInitThreads() still needs to be called, and multiple glimagesinks
still need XInitThreads().
Everything still takes libX11 handles, as they are compatible with the xcb
variants. Unfortunately we cannot move fully over to xcb because GLX is
entirely based on Xlib. It is also impossible to turn an xcb_connection_t
into a Display, which is why we still require X11 handles.
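For reference, the interop only goes one way: an xcb connection can be
obtained from an existing Xlib Display, but a Display cannot be constructed
from an xcb_connection_t. A minimal sketch of the pattern (error handling
omitted):

  #include <X11/Xlib-xcb.h>

  Display *dpy = XOpenDisplay (NULL);                /* Xlib handle, still needed for GLX */
  xcb_connection_t *conn = XGetXCBConnection (dpy);  /* xcb view of the same connection */
  /* use xcb for event handling while GLX keeps using dpy; there is no
   * API to go back from an xcb_connection_t to a Display */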
The spec allows the core/compatibility profiles to be used
with #version 150.
Also tighten up the tests to check for default profiles being chosen
correctly.
The change to use GST_EXPORT for symbols under Windows requires
GST_EXPORTS for internal use, and that is also needed under Autotools.
The same thing is done for gstreamer-1.0.dll in -core.
The calling convention may be deprecated, but we still need it for
OpenGL. The build issue was caused by an incorrect syntax being used for
the WINAPI (__stdcall) prototype in function pointers, which GCC accepts
but MSVC rejects.
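Illustrative only (the typedef name and parameter are made up): the calling
convention has to sit next to the pointer declarator for MSVC:

  /* placement MSVC rejects: calling convention applied to the return type */
  typedef WINAPI void (*GstGLExampleFunc) (int param);

  /* placement both GCC and MSVC accept */
  typedef void (WINAPI *GstGLExampleFunc) (int param);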
With MSVC, this gives the following warning:
warning C4305: 'function': truncation from 'double' to 'gfloat'
Apparently, MSVC does not infer the type to use for a constant from the
assignment. This warning is very spammy, so let's fix it.
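A minimal example of the pattern and the fix (variable names are illustrative):

  gfloat alpha = 0.5;   /* C4305: the double literal 0.5 is truncated to float */
  gfloat beta  = 0.5f;  /* float literal, no warning */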
At minimum, we only need to glFlush() if we are in a shared GL context
environment. Move the glFinish() to when the actual wait is requested,
which may be never. Improves the throughput on older GL systems without
GL3/GLES3 and/or fence sync objects.
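As a rough sketch of the intended split (not the exact code): the producing
context only needs to make its commands visible, while the expensive wait, if
it happens at all, is done by the consumer, ideally with a fence where
GL3/GLES3 sync objects are available:

  /* producer: make queued commands visible to other shared contexts */
  glFlush ();

  /* consumer, only if/when a wait is actually requested */
  glFinish ();

  /* with fence sync objects the full glFinish () can be avoided */
  GLsync fence = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  /* ... later, in the consuming context ... */
  glWaitSync (fence, 0, GL_TIMEOUT_IGNORED);
  glDeleteSync (fence);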
Using g_thread_join() in _finalize() handlers may result in a deadlock
joining the current thread when the last reference is held by a signal
handler.
e.g.:
error 'Resource deadlock avoided' during 'pthread_join (pt->system_thread, NULL)'
The backtrace looks like this:
[...]
g_thread_join ()
gst_gl_window_finalize ()
gst_gl_window_x11_finalize ()
g_object_unref ()
g_value_unset ()
g_signal_emit_valist ()
g_signal_emit ()
gst_gl_window_send_mouse_event ()
gst_gl_window_mouse_event_cb ()
g_main_dispatch ()
[..]
g_main_loop_run ()
gst_gl_window_navigation_thread ()
g_thread_proxy ()
start_thread ()
clone ()
We cannot set the x, y coordinates of the video frame on dispmanx at
this point. We need to teach the dispmanx backend about the
set_render_rectangle API so that a video can be drawn together with other UI.
This patch keeps the current behaviour, which places the video frame at the
center of the display if there is no set_render_rectangle call on the
dispmanx window.
https://bugzilla.gnome.org/show_bug.cgi?id=766018
e.g. passing with_gl_api=gles2 would still build the glx code but would not
link against the libGL library, which is where the glX* functions are
located, and would result in a linker error.
Solved by checking for the libGL library if either opengl or glx may be
needed and then disabling the corresponding deps as requested.
The tests were broken since 91fea30, which changed glupload to return
GST_GL_UPLOAD_RECONFIGURE if the texture target in the input buffers doesn't
match the texture-target configured in the output caps.
This commit fixes that and adds more checks for the new behaviour.
Now, when used with video/x-raw as input, the GLMemoryUpload method checks for
->tex_target in the input GLMemory(es) and sets the output texture-target
accordingly.
Fixes video corruption with a pipeline like avfvideosrc ! video/x-raw !
glimagesink, where on macOS avfvideosrc pushes RECTANGLE textures but glupload
was configuring texture-target=2D as output.
Don't set the chosen texture-target on the wrong structure. The input caps
may not be writable, and in any case the intention was to configure the
othercaps. Also remove an extra unref: the othercaps ref is already
consumed by gst_caps_make_writable.
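For context, gst_caps_make_writable() takes ownership of the reference passed
in, so the usage looks roughly like this (the field being set is just an
example):

  othercaps = gst_caps_make_writable (othercaps);
  gst_caps_set_simple (othercaps, "texture-target", G_TYPE_STRING, "2D", NULL);
  /* no additional gst_caps_unref (othercaps) here: the ref passed to
   * gst_caps_make_writable () was already consumed */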
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Multiple threads may be accessing the wayland fd at the same time, which
requires using the dedicated wayland read API to ensure that nobody steals
reads and causes a stall for everyone else.
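Concretely this is the wl_display_prepare_read() / wl_display_read_events()
family; a sketch of the pattern (display and pollfd assumed to be set up
elsewhere):

  while (wl_display_prepare_read (display) != 0)
    wl_display_dispatch_pending (display);
  wl_display_flush (display);

  if (poll (&pollfd, 1, timeout) > 0)
    wl_display_read_events (display);    /* exactly one reader performs the read */
  else
    wl_display_cancel_read (display);

  wl_display_dispatch_pending (display);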
When connecting to qmlglsrc, the X11 event loop is replaced by the Qt event
loop, which means the window can no longer receive events such as resize from
the X server.
https://bugzilla.gnome.org/show_bug.cgi?id=768160
Makes infinitely more sense, and implementations were expecting that behaviour
anyway and would enter a resize, draw, resize, draw, ... cycle instead of only
resizing once.
There's no need to jump to an extra thread in most cases, especially
when relying solely on a shader to render. We can use the provided
render_to_target() functions to simplify filter writing.
Facilities are given to create FBOs and attach GL memory (renderbuffers
or textures). It also keeps track of the renderable size for effective
use with glViewport().
Calling glUniformMatrix before the shader is bound is invalid and
would result in errors like:
GL_INVALID_OPERATION in glUniformMatrix(program not linked)
Move glUniformMatrix() to after the gst_gl_shader_use() call.
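i.e. roughly (the uniform name is just illustrative):

  gst_gl_shader_use (shader);
  gst_gl_shader_set_uniform_matrix_4fv (shader, "u_transformation",
      1, FALSE, transformation_matrix);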
Rather than assuming one: e.g. zerocopy on iOS with GLES3 requires
the use of Luminance/Luminance Alpha formats and does not work with
Red/RG textures.
Take the used texture type from the memory instead.
Fixes conversion from multi-planar YUV formats with two components per plane
(NV12, NV21, YUY2, UYVY, GRAY16_*, etc) with Luminance Alpha input textures.
This is also needed for zerocopy decoding on iOS with GLES 3.x.
The intention was to assert if both maj and min were NULL (as there would be no
point in calling the function). Instead, the assert would trigger if either maj
or min was NULL.
Fix that.
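i.e. the check should look roughly like this:

  /* wrong: fires as soon as either output pointer is NULL */
  g_return_if_fail (maj != NULL && min != NULL);

  /* intended: only fire when both are NULL and the call is pointless */
  g_return_if_fail (maj != NULL || min != NULL);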
Newer devices require using a different GLSL extension for accessing
external-oes textures in a shader using the texture() functions.
While GL_OES_EGL_image_external_essl3 should supposedly be supported
on any GLES3 Android device, the extension was defined after a lot of the
older drivers were built, so they will not know about it. Thus there are two
possible interpretations of which of texture[2D]() should be supported for
external-oes textures: strict adherence to the GL_OES_EGL_image_external
extension spec, which uses texture2D(), or following GLES3's pattern, also
allowing texture() as a function for accessing external-oes textures.
This adds another mangling pass to convert
#extension GL_OES_EGL_image_external : ...
into
#extension GL_OES_EGL_image_external_essl3 : ...
on GLES3 and when the GL_OES_EGL_image_external_essl3 extension is supported.
Only uses texture() when GLES3 and the GL_OES_EGL_image_external_essl3
extension are supported for external-oes textures.
Uses GLES2 + texture2D() + GL_OES_EGL_image_external in all other external-oes
cases.
https://bugzilla.gnome.org/show_bug.cgi?id=766993
Otherwise we will leak GstGLContexts when adding the same context more than
once.
Fixes a regression caused by 5f9d10f603 in the
gstglcontext unit test that failed with:
Assertion 'tmp == NULL' failed
Provide a function to get the affine matrix in the meta in terms of NDC
coordinates and use it as a standard OpenGL matrix.
Also advertise support for the affine transformation meta in the allocation
query.
Because the current GstEGLImageMemory does not inherit from GstGLMemory, GLUpload
allocates an additional GLMemory and uploads the decoded contents from decoders
which use EGLImage (e.g. gst-omx on the RPi).
This work adds GstGLMemoryEGL to avoid this overhead. Decoders allocate
GstGLMemoryEGL and decode into its EGLImage, and GLUpload then uses this memory
directly, without allocating additional textures or performing blit operations.
[Matthew Waters]: gst-indent the sources and fix a critical when retrieving the
egl display from the memory.
https://bugzilla.gnome.org/show_bug.cgi?id=760916
Allows creating wrapped memories with GstGLAllocationParams.
The wrapped pointers will be set in the parameters before being passed
to the memory allocation function.
Some platforms provide an old version of GLES2/gl2.h and GLES2/gl2ext.h that
will fail when including GLES3/gl3.h due to missing typedef's.
Seen on the RPi.
Don't create many short-lived locks/conds in gst_gl_window_send_message. This is
a micro-optimization to save a bunch of pthread_* calls, which are expensive on
OS X/iOS and possibly other platforms.
While calling eglCreateImage without a GL context current in the executing
thread works on the RPi, some other implementations will return errors.
Marshal the eglCreateImage call to the GL thread to appease these implementations.
There are numerous slight differences required between Desktop GL and GLES3 for
multiple render targets.
1. gl_FragData doesn't exist at all and one is required to use
'layout (location = ?) out ...' instead.
2. gl_FragColor doesn't exist, same as 1.
3. texture2D() has been deprecated
Fortunately most of these have been taken care of with GL3 and the shader
mangling already exists so just expand the conditions they are used in. The
gl_FragData issue requires a new mangle pass though. We also use this new
pass on desktop GL for consistency.
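As a rough illustration of points 1 and 2, the same fragment output written
both ways (the shader strings here are only examples, not the actual converter
source):

  /* GLES2 / legacy desktop GL style */
  static const gchar *frag_old =
      "void main () {\n"
      "  gl_FragData[0] = vec4 (1.0);\n"
      "}\n";

  /* GLES3 style: explicit 'out' variable with a location */
  static const gchar *frag_gles3 =
      "#version 300 es\n"
      "layout (location = 0) out highp vec4 fragColor0;\n"
      "void main () {\n"
      "  fragColor0 = vec4 (1.0);\n"
      "}\n";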
By default, reading GL_RED or GL_RG is unsupported by glReadPixels unless
exposed through GL_IMPLEMENTATION_COLOR_READ_FORMAT/TYPE. This allows
downloading multiple-planar video frames where possible.
There are some cases where it's needed for binding in/out variables in shaders,
e.g. GLSL 150 (GL 3.2) doesn't support the 'layout (location = ?)' specifiers in
the shader source, so we have to bind them ourselves.
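e.g. something along these lines has to happen before linking (variable names
illustrative):

  glBindAttribLocation (program, 0, "a_position");
  glBindFragDataLocation (program, 0, "fragColor");
  glLinkProgram (program);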
Without the GST_GL_API_GLES2 bit set, we will not even attempt to look
for the function pointers in the core library and will fall back to
glFlush/glFinish.
If the user uploads their own texture without setting the unpack length, then
the result will have the appearance of stride mismanagement due to
an incorrect row length.
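i.e. for non-tightly-packed data something like the following is needed around
the upload (values illustrative; on GLES2 this requires the
GL_EXT_unpack_subimage extension):

  glPixelStorei (GL_UNPACK_ROW_LENGTH, stride / bytes_per_pixel);
  glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, width, height,
      GL_RGBA, GL_UNSIGNED_BYTE, data);
  glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);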
If we are given caps with extra features (like the overlay composition
features), we can only deal with them when we are in passthrough mode.
Previously we were bailing entirely and not allowing passthrough filter elements
with things like textoverlay.
Fixes the following pipeline (assuming glfilter supports passthrough):
gl ! textoverlay ! glfilter ! ... ! glimagesinkelement
https://bugzilla.gnome.org/show_bug.cgi?id=763756
When transforming, explode it out into the necessary caps features both
with and without the passthrough features.
Fixes negotiation in the following class of pipelines:
gl ! textoverlay ! glupload ! glimagesinkelement
https://bugzilla.gnome.org/show_bug.cgi?id=763756
GL 1.4 (with GL_ARB_shader_objects) doesn't have glIsProgram or glIsShader
equivalents. As they are simply assertions, skip them when there isn't a
valid function pointer.
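i.e. something like this (gl being the GstGLFuncs vtable, program_handle just
an example):

  /* only assert when the entry point actually exists */
  if (gl->IsProgram)
    g_assert (gl->IsProgram (program_handle));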
With e38af23044 returning the correct contexts,
gst_gl_display_add_context() was susceptible to causing infinite loops when
adding the same GstGLContext more than once. Fix and add a test for
gst_gl_display_add_context().
Fixes glvideomixer gst-validate tests.
Execute GL calls without marshalling them to the context thread. In the cocoa
and eagl backends calling gst_gl_context_activate is cheap and therefore calling
it on the current thread and serializing GL calls with a per-context lock is
more efficient (faster and has less overhead) than marshalling everything to the
context thread.
This optimization cuts a large overhead in g_poll (continuously waking up the
context thread) and in g_mutex_*/g_cond_* (waiting for results from the context
thread).
When requesting a glcontext (regardless of thread), the result was correct.
However, when requesting the current glcontext on a specific thread, it could
come up with a glcontext that was active on another thread.
https://bugzilla.gnome.org/show_bug.cgi?id=763168
Maxsize is initialized once and should never change. Allocating data
should have no impact on the selected max size for this memory. This was
causing memory map failures, as the maxsize would become smaller than
size. This happened when using direct rendering in avviddec on a GL stack
that does not support PBO transfers.
https://bugzilla.gnome.org/show_bug.cgi?id=763045