We need different export decorators for the different libs.
For now no actual change though, just rename before the release,
and add prelude headers to define the new decorator to GST_EXPORT.
When outputting more than two channels, a channel-mask has to be
specified in the output caps.
We follow the same heuristic as in other cases: when downstream
does not specify a channel-mask, we use that of the first
configured pad, and if there was none we generate a fallback
mask.
https://bugzilla.gnome.org/show_bug.cgi?id=794257
The various id3v2 specs handle the extended header sizes differently
(because hey, it wouldn't be fun otherwise).
http://id3.org/id3v2.3.0 states:
"Where the 'Extended header size', currently 6 or 10 bytes, excludes
itself."
http://id3.org/id3v2.4.0-structure states:
Extended header size 4 * %0xxxxxxx
Number of flag bytes $01
Extended Flags $xx
Where the 'Extended header size' is the size of the whole extended
header, stored as a 32 bit synchsafe integer. An extended header can
thus never have a size of fewer than six bytes.
So in id3v2.4.0 it's the *whole* extended header size (à la ISOBMFF
atoms), whereas in id3v2.3.0 it's the extended header size *excluding*
those 4 initial bytes.
And for other versions, god knows...
Fixes regression introduced in commit da607005.
https://bugzilla.gnome.org/show_bug.cgi?id=792983
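For illustration, a parser could normalise both interpretations above to the
total extended header size roughly like this (the helper name and handling of
other versions are made up, not taken from the patch):

  #include <stdint.h>

  /* Return the TOTAL size of the extended header, given its 4 size bytes. */
  static uint32_t
  id3v2_ext_header_total_size (uint8_t version_major, const uint8_t size_bytes[4])
  {
    if (version_major == 4) {
      /* id3v2.4.0: 32-bit synchsafe integer, already the whole header size */
      return (size_bytes[0] << 21) | (size_bytes[1] << 14) |
          (size_bytes[2] << 7) | size_bytes[3];
    } else {
      /* id3v2.3.0 behaviour (other versions are unclear): plain 32-bit
       * big-endian size EXCLUDING these 4 initial bytes */
      uint32_t size = ((uint32_t) size_bytes[0] << 24) | (size_bytes[1] << 16) |
          (size_bytes[2] << 8) | size_bytes[3];
      return size + 4;
    }
  }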
Allow for sub-classes to change pad templates to
support other texture targets, and bind input textures
accordingly.
When setting the caps, also store the texture target.
By default, glfilter only reports 2D texture targets
in the default caps, but sub-classes can change that
and it would be nice if they could easily find out
which texture targets were negotiated.
This adds 2 fields to the public struct, but since
it's unreleased -base API, it's not an ABI break.
This prevents cross compilation errors like:
usr/include/xf86drm.h:40:10: fatal error: drm.h: No such file or directory
They are caused by gstgldisplay_gbm.h including xf86drm.h.
https://bugzilla.gnome.org/show_bug.cgi?id=793837
It is the only thing gst_pb_utils_init() does, and it could be
called automatically from the places in pbutils where it is needed.
After 1.14 we should deprecate gst_pb_utils_init().
https://bugzilla.gnome.org/show_bug.cgi?id=793611
The current GstVideoRegionOfInterestMeta API allows elements to detect
and name ROIs but doesn't say anything about how this information is
meant to be consumed by downstream elements.
Typically, encoders may want to tweak their encoding settings for a
given ROI to increase or decrease their quality.
Each encoder has its own set of settings so that's not something that
can be standardized.
This patch adds encoder-specific parameters to the meta which can be
used to configure the encoding of a specific ROI.
A typical use case would be: source ! roi-detector ! encoder
with a buffer probe on the encoder sink pad set by the application.
Thanks to the probe, the application will be able to tell the encoder
how this specific region should be encoded.
Users could also develop their own ROI detectors meant to be used with a
specific encoder and directly attach the encoder parameters when
detecting the ROI.
https://bugzilla.gnome.org/show_bug.cgi?id=793338
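A minimal sketch of the probe-based use case described above; the structure
name and delta-qp parameter are hypothetical, and the add_param helper name is
assumed rather than quoted from the patch:

  #include <gst/gst.h>
  #include <gst/video/gstvideometa.h>

  /* Buffer probe installed by the application on the encoder sink pad. */
  static GstPadProbeReturn
  roi_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
  {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    GstVideoRegionOfInterestMeta *meta =
        gst_buffer_get_video_region_of_interest_meta (buf);

    if (meta != NULL) {
      /* Hypothetical encoder-specific parameters for this region */
      GstStructure *params = gst_structure_new ("roi/x264enc",
          "delta-qp", G_TYPE_INT, -10, NULL);
      gst_video_region_of_interest_meta_add_param (meta, params);
    }
    return GST_PAD_PROBE_OK;
  }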
Performance optimisation: keep track of whether the streaming
thread or the application thread is waiting on the GCond for
more space or new data, and only signal the GCond if someone
is actually waiting. This avoids unnecessary syscalls and thus
context switches.
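A minimal sketch of that pattern (not the actual element code): a flag records
whether anyone is blocked on the GCond, and the other side only signals when
the flag is set.

  #include <glib.h>

  typedef struct {
    GMutex lock;
    GCond cond;
    gboolean waiting;   /* TRUE while some thread blocks on the cond */
  } WaitState;

  static void
  wait_for_change (WaitState * s)
  {
    g_mutex_lock (&s->lock);
    s->waiting = TRUE;
    g_cond_wait (&s->cond, &s->lock);
    s->waiting = FALSE;
    g_mutex_unlock (&s->lock);
  }

  static void
  notify_change (WaitState * s)
  {
    g_mutex_lock (&s->lock);
    if (s->waiting)
      g_cond_signal (&s->cond);   /* skip the syscall when nobody waits */
    g_mutex_unlock (&s->lock);
  }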
When trying to create a wayland display, it may fail because there
is no display to connect to. In this case NULL is returned
but the created instance is not freed.
This patch unrefs the failed display.
https://bugzilla.gnome.org/show_bug.cgi?id=793483
When the GstRTSPConnection class sends a RTSP over HTTP tunnelling
request, the HTTP Content-Type header is missing from the HTTP POST
request.
This isn't a problem with most servers, but there are servers that
reject the request unless there is also a Content-Type header.
RFC 1945:
Any HTTP/1.0 message containing an entity body should include a
Content-Type header field defining the media type of that body.
Apple Dispatch 28:
QuickTime Streaming uses the "application/x-rtsp-tunnelled" MIME
type in both the Content-Type and Accept headers. This reflects
the data type that is expected and delivered by the client and server.
https://bugzilla.gnome.org/show_bug.cgi?id=793110
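As an illustrative sketch only (the actual fix lives inside the
GstRTSPConnection tunnelling code), adding that header to an HTTP POST
request message would look roughly like this:

  #include <gst/rtsp/gstrtspmessage.h>

  /* Add the Content-Type header expected by picky tunnelling servers. */
  static void
  add_tunnel_content_type (GstRTSPMessage * request)
  {
    gst_rtsp_message_add_header (request, GST_RTSP_HDR_CONTENT_TYPE,
        "application/x-rtsp-tunnelled");
  }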
The source offset (soff) was not incremented for each component, and then
each group of 3 components was inverted. This was causing a staircase
effect combined with some noise.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
This adds a 10-bit variant of NV16 packed into 32-bit little-endian
words. The 2 MSB are padding. This format is used on Xilinx SoCs and is
identified by the FOURCC XV20.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
This adds a 10-bit variant of grayscale packed into 32-bit little-endian
words. The 2 MSB are padding and should be ignored. This format is
used on Xilinx SoCs and is identified by the FOURCC XV10.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
This adds a 10-bit variant of NV12 which packs 3 10-bit components
into little-endian 32-bit words. The 2 MSB are padding and should be
ignored. This format is used on Xilinx SoCs and is identified there
by the FOURCC XV15.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
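A small sketch of the word layout shared by the three Xilinx formats above
(the component order within the word is an assumption here):

  #include <stdint.h>

  /* Unpack three 10-bit components from one 32-bit little-endian word;
   * bits 31..30 are padding and are ignored. */
  static void
  unpack_3x10 (uint32_t word, uint16_t out[3])
  {
    out[0] = word & 0x3ff;          /* bits  9..0  */
    out[1] = (word >> 10) & 0x3ff;  /* bits 19..10 */
    out[2] = (word >> 20) & 0x3ff;  /* bits 29..20 */
  }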
We can pass string constants here to g_strdup_printf(),
so do so and re-enable the -Wformat-nonliteral warning
we had to disable when merging the opengl libs.
If the timestamp goes forward more than allowed, we consider that it
belongs to the previous counting, so the extended timestamp
is unwrapped.
https://bugzilla.gnome.org/show_bug.cgi?id=783443
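A generic sketch of that unwrapping rule; the wrap period, threshold and
helper below are hypothetical and not taken from the patch:

  #include <stdint.h>

  #define TS_WRAP      ((uint64_t) 1 << 24)  /* wrap period of the raw timestamp */
  #define MAX_FORWARD  (TS_WRAP / 2)         /* largest allowed forward jump */

  static uint64_t
  extend_timestamp (uint64_t last_ext, uint32_t raw)
  {
    /* raw is the on-wire value, always < TS_WRAP; place it in the same
     * period as the last extended timestamp */
    uint64_t ext = (last_ext & ~(TS_WRAP - 1)) | raw;

    if (ext > last_ext + MAX_FORWARD && ext >= TS_WRAP)
      ext -= TS_WRAP;          /* went forward too far: previous counting */
    else if (ext + MAX_FORWARD < last_ext)
      ext += TS_WRAP;          /* raw value wrapped around: next counting */

    return ext;
  }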
Tests and documentation will follow separately.
The mixer elements in the opengl plugin need to stay
in -bad for now since they use GstVideoAggregator.
https://bugzilla.gnome.org/show_bug.cgi?id=754094
This can be used in a generic way as a common interface by all platforms
that, in one way or another, pass around physical memory addresses.
This is used by the gl lib and seems useful enough, so might just as
well move it next to the other allocators.
https://bugzilla.gnome.org/show_bug.cgi?id=779067
As most Wayland compositors support XWayland, the X11 backend gets
selected. This also better aligns GStreamer's decision with what
happens with GTK and other stacks out there.
This patch adds code to gldownload to export the image as a
dmabuf if requested. The element now exposes memory:DMABuf as
a caps feature, and if it is selected, the element exports the
texture to an EGL image and then a dmabuf. It also implements a
fallback to system memory download in case the export fails.
https://bugzilla.gnome.org/show_bug.cgi?id=776927
Undefined symbols for architecture x86_64:
"_gst_gl_context_cocoa_get_type", referenced from:
__create_layer in libgstopengl_la-caopengllayersink.o
Might need some more in other headers, but first we need to
clarify what exactly should be exported; there are some
inconsistencies (installed header files vs. functions in the docs).
It causes crashes in applications because the result of
fbGetDisplay() might be in use elsewhere in the application
and Vivante doesn't seem to do any refcounting.
This reverts commit 47fd4d391e.
This patch is incorrect. It doesn't actually compile, and causes a crash
because the viv-fb window implementation needs a native EGL handle
to pass to fbCreateWindow, but the GstGLDisplayEGL handle is actually
an EGLDisplay now (and gets cast to the wrong type).
This simplifies the code a lot without any functional changes apart from
not closing the display connection. Closing the display connection is
not safe to do as it is shared between all other code in the same
process and no reference counting or anything happens at the platform
layer.
1. Propagate the GstGLDisplay we create
2. Add the created GstGLContext to the propagated GstGLDisplay
Otherwise, with multi-branch GL pipelines involving gtkglsink, things
will fall apart and errors will be generated somewhere.
Except for gst/gl/gstglfuncs.h
It is up to the client app to include these headers.
It is coherent with the fact that gstreamer-gl.pc does not
require any egl.pc/gles.pc, i.e. it is the responsibility
of the app to locate these headers within its build setup.
For example, gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
it will *no longer* automatically include EGL/egl.h and GLES2/gl2.h,
which is good because the app might want to use only the gstgl API
without having to bother about GL headers.
Also added a test: cd tests/check && make libs/gstglheaders.check
https://bugzilla.gnome.org/show_bug.cgi?id=784779
Make a bunch of symbols private that are currently leaked
accidentally because they have a gst_* prefix and are used
internally. We mark those we can't make static with
G_GNUC_INTERNAL so that they get hidden with the autotools
build as well (although we could just pass -fvisibility=hidden
there too).
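For illustration (the function name below is hypothetical), hiding such a
symbol looks like this:

  #include <glib.h>

  /* Internal helper with a gst_ prefix that must not be exported from the
   * library; G_GNUC_INTERNAL gives it hidden visibility with GCC/Clang. */
  G_GNUC_INTERNAL
  void gst_gl_do_internal_things (void);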
Found on the RPi when gpu_mem is too low, so there is not enough memory to
create the EGLImage, but gst_buffer_pool_acquire_buffer still succeeded.
This leads to a CRITICAL assert:
gst_egl_image_get_image: assertion 'GST_IS_EGL_IMAGE (image)' failed
https://bugzilla.gnome.org/show_bug.cgi?id=785518
Avoids dereferencing dead objects.
What happens in the autovideosink case is that context 1 is created and
destroyed before all the async operations have executed on the associated
window. When the delayed operations execute, they then reference dead
objects and crash.
We fix this by keeping refs over all async operations so the object
cannot be deleted while async operations are in flight.
https://bugzilla.gnome.org/show_bug.cgi?id=782379
Add a function to install the default RGBA pad templates,
but don't make them required so that there can be
GstGLFilter sub-classes with different input/output
caps if they want. Remove the hard-coded RGBA restriction in
the set_caps_features call, as it will be taken care
of by intersecting with the pad templates.
Update all the sub-classes to match
On the Raspberry Pi, no pkg-config file is provided for the bcm_host
library. We are using AC_CHECK_LIB to detect this lib with autotools;
cc.find_library() is the closest Meson equivalent.
https://bugzilla.gnome.org/show_bug.cgi?id=784026
We have to pass the "height" as height = vmeta->offset[1] / width to the
API, which of course does not work well for formats with only a single
plane. Use the whole memory size instead of the offset in that case.
GL_RGB565 is a sized internal glformat; the corresponding glformat
should be GL_RGB and the type GL_UNSIGNED_SHORT_5_6_5. Otherwise GL will
return GL_INVALID_ENUM when creating the texture.
https://bugzilla.gnome.org/show_bug.cgi?id=783066
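A minimal sketch of the valid combination (allocation only, no pixel data;
sized internal formats in glTexImage2D need GLES3 or recent desktop GL):

  #include <GLES3/gl3.h>

  /* GL_RGB565 is only valid as the sized internal format; the client
   * format/type pair must be GL_RGB / GL_UNSIGNED_SHORT_5_6_5. */
  static void
  alloc_rgb565_texture (int width, int height)
  {
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB565, width, height, 0,
        GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
  }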
With the macOS/iOS implementations, the active thread can change
multiple times over the life of a pipeline which would expose a race in
the thread tracking.
Fix by taking a ref on the active thread while the context is active.
https://bugzilla.gnome.org/show_bug.cgi?id=779202
Otherwise fall back to glDrawBuffers. Also check if glReadBuffer exists
before using it.
glDrawBuffer does not exist for GLES, only glDrawBuffers does.
https://bugzilla.gnome.org/show_bug.cgi?id=782376
meson's configure_file emits only a comment like /* #undef ... */
for values which are unset in the configuration_data. For
gstglconfig.h, this differs from the autotools build where the
preprocessor definitions are always either 0 or 1. So loop over a
list of variables to set to zero as default.
Also sync up the gstglconfig.h.meson file with the additional
macros defined by the autotools build.
https://bugzilla.gnome.org/show_bug.cgi?id=781043
Windows aren't always removed in time, and it turns out to be
very, very hard to remove a window in a way that's not racy and
not deadlocky. Since the window itself doesn't leak, freeing
the list on object destruction is enough.
https://bugzilla.gnome.org/show_bug.cgi?id=781018
The GstGLFramebufferClass struct is typedeffed in
gstgl_fwd.h, and having a duplicate elsewhere is
breaking the cerbero build on my OSX machine,
even though it seems to be working in CI.
In commit
> 956c4d0 gl/format: use our own GL format enum's instead of gstvideo's
the name and return type of gst_gl_format_from_video_info changed,
but some returns of the old type were missed. Here they are
updated to the correct type.
https://bugzilla.gnome.org/show_bug.cgi?id=780064
All code interacting with Objective-C objects should now use Automatic
Reference Counting rather than manual memory management or Garbage
Collection. Because ARC prohibits C-structs from containing
references to Objective-C objects, all such fields are now typed
'gpointer'. Setting and getting Objective-C fields on such a
struct now uses explicit __bridge_* calls to tell ARC about
object lifetimes.
https://bugzilla.gnome.org/show_bug.cgi?id=777847
../../../../gst-libs/gst/gl/gl.h:57:45: fatal error: gst/gl/gstglcontrolbindingproxy.h: No such file or directory
#include <gst/gl/gstglcontrolbindingproxy.h>
^
No-one's using/depending on it (it would have criticalled and not worked)
and it's causing more problems than it's solving. Store the GMainContext
in the public struct instead for subclasses to optionally use, rather than
relying on the push/pop state being correct.
https://bugzilla.gnome.org/show_bug.cgi?id=775970
If a subclass of GstGLContext does not create a group
then it currently crashes:
0 g_atomic_int_get (&share->refcount)
1 _context_share_group_is_shared (context->priv->sharegroup)
2 gst_gl_context_is_shared
3 _default_set_sync_gl
https://bugzilla.gnome.org/show_bug.cgi?id=774518
Calling g_main_context_push_thread_default() and then g_main_context_invoke()
(used by gst_gl_window_send_message_async()) in the same thread will
cause the invoked function to run immediately instead of being delayed.
This had implications for the creation of the OpenGL context not waiting
until the main loop had completely started up and as a result would
sometimes deadlock in short create/destroy scenarios.
https://bugzilla.gnome.org/show_bug.cgi?id=775171
626bcccff9 removed some locks that
allowed the main loop quit to occur before the context was fully
created.
2776cef25d attempted to re-add them but
missed the scope of the quit() call.
Also remove the use of g_thread_join() as that's not safe to use when
it's possible to lose the last reference from the GL thread.
https://bugzilla.gnome.org/show_bug.cgi?id=775171
It's been removed and thus compiling anything against GstGLMemoryEGL
would error with:
In file included from gstomxvideodec.c:41:0:
usr/include/gstreamer-1.0/gst/gl/egl/gstglmemoryegl.h:32:41: fatal error: gst/gl/egl/gstglcontext_egl.h: No such file or directory
#include <gst/gl/egl/gstglcontext_egl.h>
^
https://bugzilla.gnome.org/show_bug.cgi?id=774886
Otherwise, when the application reuses the same UIView, we were getting
draw notifications on the previous views/layers which weren't valid anymore
and were referencing pointers that had been freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753003
- xcb is supposedly thread-safe!
videotestsrc ! glimagesink now doesn't spuriously result in a
'call XInitThreads()' error. However, if anybody else is using X11,
then XInitThreads() still needs to be called, and multiple glimagesinks
still need XInitThreads().
Everything still takes libX11 handles as they are compatible with the xcb
variants. Unfortunately we cannot move fully over to xcb due to GLX being
entirely based on Xlib. It's also impossible to transform an xcb_connection
into a Display, which means we require X11 handles.
The spec allows the core/compatibility profiles to be used
with #version 150.
Also tighten up the tests to check for default profiles being chosen
correctly.
The change to use GST_EXPORT for symbols under Windows requires
GST_EXPORTS for internal use, and that is also needed under Autotools.
The same thing is done for gstreamer-1.0.dll in -core.
The calling convention may be deprecated, but we still need it for
OpenGL. The build issue was caused by an incorrect syntax being used for
the WINAPI (__stdcall) prototype in function pointers which was accepted
by GCC but is rejected by MSVC.
With MSVC, this gives the following warning:
warning C4305: 'function': truncation from 'double' to 'gfloat'
Apparently, MSVC does not figure out what type to use for constants
based on the assignment. This warning is very spammy, so let's try to
fix it.
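For illustration, the usual way to silence C4305 (presumably what the fix
does) is to use single-precision literals:

  #include <glib.h>

  static void
  example (void)
  {
    gfloat a = 0.5;    /* MSVC: warning C4305, double truncated to gfloat */
    gfloat b = 0.5f;   /* float literal, no warning */
    (void) a;
    (void) b;
  }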
At minimum, we only need to glFlush() if we are in a shared GL context
environment. Move the glFinish() to when the actual wait is requested
which may be never. Improves the throughput on older GL systems without
GL3/GLES3 and/or fence sync objects.
Using g_thread_join() in _finalize() handlers may result in a deadlock
joining the current thread when the last reference is held by a signal
handler.
e.g.:
error 'Resource deadlock avoided' during 'pthread_join (pt->system_thread, NULL)'
The backtrace looks like this:
[...]
g_thread_join ()
gst_gl_window_finalize ()
gst_gl_window_x11_finalize ()
g_object_unref ()
g_value_unset ()
g_signal_emit_valist ()
g_signal_emit ()
gst_gl_window_send_mouse_event ()
gst_gl_window_mouse_event_cb ()
g_main_dispatch ()
[..]
g_main_loop_run ()
gst_gl_window_navigation_thread ()
g_thread_proxy ()
start_thread ()
clone ()
We cannot set the x, y coordinates of the video frame with dispmanx at
this point. We need to teach the dispmanx backend about the
set_render_rectangle API to draw a video alongside other UI.
This patch keeps the current behavior, which places the video frame at the
center of the display if there is no set_render_rectangle call on the
dispmanx window.
https://bugzilla.gnome.org/show_bug.cgi?id=766018
e.g. passing with_gl_api=gles2 would still build the GLX code but not
link against the libGL library, which is where the glX* functions are
located, and would result in a linker error.
Solved by checking for the libGL library if either opengl or glx may be
needed, and then disabling the corresponding deps as requested.
The tests were broken since 91fea30, which changed glupload to return
GST_GL_UPLOAD_RECONFIGURE if the texture target in the input buffers doesn't
match the texture-target configured in the output caps.
This commit fixes that and adds more checks for the new behaviour.
Now when used with video/x-raw as input, the GLMemoryUpload method checks for
->tex_target in input GLMemory(es) and sets the output texture-target
accordingly.
Fixes video corruption with a pipeline like avfvideosrc ! video/x-raw !
glimagesink where on macos avfvideosrc pushes RECTANGLE textures but glupload
was configuring texture-target=2D as output.
Don't set the chosen texture-target into the wrong structure.
The input caps may not be writable, and in any case the
intention was to configure the othercaps. Also, remove an
extra unref: the othercaps ref is already consumed by
gst_caps_make_writable.
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Multiple threads may be accessing the wayland fd at the same time, which
requires the use of the special wayland API for this, to ensure nobody
steals reads and causes a stall for anyone else.
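The special API in question is libwayland's prepare-read protocol; a minimal
sketch of how one thread safely reads and dispatches events (this is the
generic pattern, not code from this change):

  #include <poll.h>
  #include <wayland-client.h>

  static void
  read_and_dispatch (struct wl_display *display)
  {
    /* Register as a reader; dispatch anything already queued first. */
    while (wl_display_prepare_read (display) != 0)
      wl_display_dispatch_pending (display);
    wl_display_flush (display);

    struct pollfd pfd = { wl_display_get_fd (display), POLLIN, 0 };
    if (poll (&pfd, 1, -1) > 0)
      wl_display_read_events (display);   /* only one thread actually reads */
    else
      wl_display_cancel_read (display);   /* must balance prepare_read() */

    wl_display_dispatch_pending (display);
  }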
When connecting to qmlglsrc, the X11 event loop will be replaced by the Qt
event loop, which means the window cannot receive events from the X server,
such as resize.
https://bugzilla.gnome.org/show_bug.cgi?id=768160
Makes infinitely more sense, and implementations were expecting that behaviour
anyway and would enter a resize, draw, resize, draw, ... cycle instead of only
resizing once.
There's no need for the jump to an extra thread in most cases, especially
when relying solely on a shader to render. We can use the provided
render_to_target() functions to simplify filter writing.
Facilities are given to create FBOs and attach GL memory (renderbuffers
or textures). It also keeps track of the renderable size for
effective use with glViewport().
Calling glUniformMatrix before the shader is bound is invalid and
would result in errors like:
GL_INVALID_OPERATION in glUniformMatrix(program not linked)
Move glUniformMatrix() to after the gst_gl_shader_use() call.
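A minimal sketch of the corrected ordering; the uniform name and the exact
setter used here are assumptions:

  #include <gst/gl/gl.h>

  static void
  upload_transform (GstGLShader * shader, gfloat * matrix)
  {
    gst_gl_shader_use (shader);   /* bind the program first */
    gst_gl_shader_set_uniform_matrix_4fv (shader, "u_transformation",
        1, FALSE, matrix);        /* now valid: the program is bound */
  }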
Rather than assuming something. e.g. zerocopy on iOS with GLES3 requires
the use of Luminance/Luminance Alpha formats and does not work with
Red/RG textures.
Take the used texture type from the memory instead.
Fixes conversion from multi-planar YUV formats with two components per plane
(NV12, NV21, YUY2, UYVY, GRAY16_*, etc) with Luminance Alpha input textures.
This is also needed for zerocopy decoding on iOS with GLES 3.x.
The intention was to assert if both maj and min were NULL (as there would be no
point calling the function). Instead, if either maj or min was NULL, the assert
would occur.
Fix that.
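A sketch of the intended precondition; the function and argument names here
are placeholders:

  #include <glib.h>

  static void
  get_version (gint * maj, gint * min)
  {
    /* Assert only when BOTH out-parameters are NULL (a pointless call).
     * The broken check fired when EITHER one was NULL. */
    g_return_if_fail (maj != NULL || min != NULL);

    if (maj)
      *maj = 3;   /* placeholder values */
    if (min)
      *min = 1;
  }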
Newer devices require using a different GLSL extension for accessing
external-oes textures in a shader using the texture() functions.
While GL_OES_EGL_image_external_essl3 should supposedly be supported
on any GLES3 Android device, the extension was defined after a lot of the
older drivers were built, so they will not know about it. Thus there are two
possible interpretations of which of texture[2D]() should be supported for
external-oes textures: strict adherence to the GL_OES_EGL_image_external
extension spec, which uses texture2D(), or following GLES3's pattern, also
allowing texture() as a function for accessing external-oes textures.
This adds another mangling pass to convert
#extension GL_OES_EGL_image_external : ...
into
#extension GL_OES_EGL_image_external_essl3 : ...
on GLES3 and when the GL_OES_EGL_image_external_essl3 extension is supported.
Only uses texture() when GLES3 and the GL_OES_EGL_image_external_essl3
extension are supported for external-oes textures.
Uses GLES2 + texture2D() + GL_OES_EGL_image_external in all other external-oes
cases.
https://bugzilla.gnome.org/show_bug.cgi?id=766993
Otherwise we will leak GstGLContext's when adding the same context more than
once.
Fixes a regression caused by 5f9d10f603 in the
gstglcontext unit test that failed with:
Assertion 'tmp == NULL' failed
Provide a function to get the affine matrix in the meta in terms of NDC
coordinates for use as a standard OpenGL matrix.
Also advertise support for the affine transformation meta in the allocation
query.
Because the current GstEGLImageMemory does not inherit from GstGLMemory,
GLUpload allocates an additional GLMemory and uploads the decoded contents
from decoders which use EGLImage (e.g. gst-omx on the RPi).
This work adds GstGLMemoryEGL to avoid this overhead. Decoders allocate
GstGLMemoryEGL and decode their contents into the EGLImage of the
GstGLMemoryEGL, and GLUpload uses this memory without allocating additional
textures or doing blit operations.
[Matthew Waters]: gst-indent the sources and fix a critical retrieving the egl
display from the memory.
https://bugzilla.gnome.org/show_bug.cgi?id=760916
Allows creating wrapped memories with GstGLAllocationParams.
The wrapped pointers will be set in the parameters before being passed
to the memory allocation function.
Some platforms provide an old version of GLES2/gl2.h and GLES2/gl2ext.h that
will fail when including GLES3/gl3.h due to missing typedefs.
Seen on the RPi.