We haven't done any blending ourselves for a while now.
Note that this is a regression in the set of "supported" formats:
previously ARGB64 was advertised as supported, for example, but in
practice it caused blending not to take place at all.
Without a qmlglsink, we need to retrieve the window system display
ourselves rather than relying on qmlglsink having priority in the
choice of display.
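A minimal sketch of such a fallback; the helper name and the idea of a
display handed over by qmlglsink are assumptions for illustration, only
gst_gl_display_new() and gst_object_ref() are real GStreamer API:

```cpp
#include <gst/gl/gl.h>

/* Sketch: prefer the display chosen by qmlglsink, but create our own
 * window system display when none was provided. */
static GstGLDisplay *
ensure_display (GstGLDisplay *from_qmlglsink)
{
  if (from_qmlglsink != NULL)
    return (GstGLDisplay *) gst_object_ref (from_qmlglsink);
  return gst_gl_display_new ();  /* autodetects the platform display */
}
```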
Deleting a QOpenGLFramebufferObject must happen on the same thread it
was created on in order to actually free the relevant resources
immediately. Otherwise, they are queued for deletion and not freed
until the associated QOpenGLContext is destroyed.
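A minimal sketch of one way to honour that rule, assuming Qt >= 5.10
and a QObject (`glThreadObject`) living on the GL thread; both are
assumptions, not the plugin's actual code:

```cpp
#include <QOpenGLFramebufferObject>
#include <QObject>

/* Sketch: queue the delete onto the thread that created the FBO, where
 * its QOpenGLContext lives, so the GL resources are released right away
 * instead of lingering until the context itself is destroyed. */
static void
scheduleFboDeletion (QOpenGLFramebufferObject *fbo, QObject *glThreadObject)
{
  QMetaObject::invokeMethod (glThreadObject, [fbo] () {
    delete fbo;   /* runs on the creating thread */
  }, Qt::QueuedConnection);
}
```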
By explicitly including QtGui/qopengl.h we force the code path that
defines GLsync in the Qt-specific way. Without that, some platforms
failed to compile the qmlgl plugin, since neither Qt nor GStreamer
defined GLsync then, leading to e.g.:
```
make[4]: Entering directory '/.../gst-plugins-good-1.16.1/ext/qt'
CXX libgstqmlgl_la-qtitem.lo
In file included from gstqtgl.h:32,
from qtitem.h:27,
from qtitem.cc:28:
/.../usr/include/gstreamer-1.0/
gst/gl/gstglfuncs.h:93:17: error: expected identifier before ‘*’ token
ret (GSTGLAPI *name) args;
^
/.../usr/include/gstreamer-1.0/
gst/gl/glprototypes/sync.h:27:1: note: in expansion of macro
‘GST_GL_EXT_FUNCTION’
GST_GL_EXT_FUNCTION (GLsync, FenceSync,
^~~~~~~~~~~~~~~~~~~
```
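A minimal sketch of the fix (illustrative only; in the plugin the
include belongs in the shared Qt/GL header): pulling in Qt's GL header
before the GStreamer GL prototypes ensures GLsync is already defined
when gstglfuncs.h expands them.

```cpp
/* Sketch: include Qt's GL header first so GLsync is defined the Qt way
 * before gstglfuncs.h expands its function-pointer prototypes. */
#include <QtGui/qopengl.h>
#include <gst/gl/gstglfuncs.h>
```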
Disable session sharing and the cookie jar when the cookies property is
set. The cookie jar actually replaces or removes any existing Cookie
header set on the message, so the cookies property was effectively being
ignored. There doesn't appear to be a way to inject the cookies into the
jar without having to specify matching domains etc., so it's not
possible to simulate the old behaviour of unconditionally sending the
cookies with all messages, other than simply disabling the cookie jar.
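A minimal sketch of the workaround, assuming libsoup 2.x; the helper,
URL, and cookie string are placeholders, only the libsoup calls are
real: remove the cookie-jar feature from the session and append the
user-supplied cookies to each message by hand.

```cpp
#include <libsoup/soup.h>

/* Sketch: drop the cookie jar so it cannot rewrite the Cookie header,
 * then attach the user-supplied cookies to every message manually. */
static SoupMessage *
make_request_with_cookies (SoupSession *session, const char *cookies)
{
  soup_session_remove_feature_by_type (session, SOUP_TYPE_COOKIE_JAR);

  SoupMessage *msg = soup_message_new ("GET", "https://example.com/");
  soup_message_headers_append (msg->request_headers, "Cookie", cookies);
  return msg;
}
```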
In Google's WebRTC, VP8E_SET_STATIC_THRESHOLD is set to 1 (except when
the content is known to be mostly static, in which case it is set to
100, i.e. when sharing the screen with Google Hangouts). CPU usage
drops a lot when using 1 for this setting because it allows the
encoder to skip static/low-content blocks. The current default value
of 0 uses too much CPU and confuses users about the expected CPU
usage: users expect vp8enc to use little CPU by default.
Documentation of VP8E_SET_STATIC_THRESHOLD:
https://github.com/webmproject/libvpx/blob/master/vpx/vp8cx.h#L188
chromium/webrtc:
b484ec0082/modules/video_coding/codecs/vp8/libvpx_vp8_encoder.cc (822)

Closes #58
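A minimal sketch of applying that default through libvpx's control
interface; the helper function is hypothetical, only
vpx_codec_control() and VP8E_SET_STATIC_THRESHOLD come from libvpx:

```cpp
#include <vpx/vpx_encoder.h>
#include <vpx/vp8cx.h>

/* Sketch: after vpx_codec_enc_init(), raise the static threshold so the
 * encoder may skip static/low-change macroblocks. */
static void
apply_static_threshold (vpx_codec_ctx_t *encoder)
{
  /* 1 matches WebRTC's default; 100 is used there for screen sharing,
   * where content is known to be mostly static. */
  vpx_codec_control (encoder, VP8E_SET_STATIC_THRESHOLD, 1);
}
```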
If Mesa is built without X11 headers, building against Mesa EGL headers
requires a dependency on egl.pc, to define MESA_EGL_NO_X11_HEADERS.
This fixes a build error when compiling ext/qt/gstqtglutility.cc:
```
In file included from /usr/include/EGL/egl.h:39,
from /usr/include/gstreamer-1.0/gst/gl/egl/gstegl.h:44,
from ../gst-plugins-good-1.16.1/ext/qt/gstqtglutility.cc:43:
/usr/include/EGL/eglplatform.h:124:10: fatal error: X11/Xlib.h: No such file or directory
```
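For illustration only (the define normally arrives via
`pkg-config --cflags egl` rather than being hard-coded): with
MESA_EGL_NO_X11_HEADERS defined, Mesa's eglplatform.h stops pulling in
X11/Xlib.h.

```cpp
/* Sketch: egl.pc supplies -DMESA_EGL_NO_X11_HEADERS when Mesa is built
 * without X11; defining it by hand here just demonstrates the effect. */
#define MESA_EGL_NO_X11_HEADERS 1
#include <EGL/egl.h>
```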
Passing `NULL` to `g_signal_new` instead of a marshaller lets GLib
internally optimize the signal (if the marshaller is available in GLib
itself) by also setting the valist marshaller. This makes signal
emission a bit more performant than the regular marshalling, which
still needs to box arguments into `GValue` and, in the case of the
generic marshaller, call libffi.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
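A minimal class_init sketch; the signal name, class struct, and valist
marshaller are hypothetical, only the GLib calls are real:

```cpp
#include <glib-object.h>

static guint frame_rendered_signal;  /* hypothetical signal id */

static void
my_element_class_init (gpointer klass)
{
  /* NULL as the C marshaller lets GLib pick both its built-in C
   * marshaller and the matching valist marshaller for VOID:void. */
  frame_rendered_signal =
      g_signal_new ("frame-rendered", G_TYPE_FROM_CLASS (klass),
          G_SIGNAL_RUN_LAST, 0, NULL, NULL,
          NULL /* marshaller */, G_TYPE_NONE, 0);

  /* With a custom C marshaller, the equivalent optimization would be:
   * g_signal_set_va_marshaller (frame_rendered_signal,
   *     G_TYPE_FROM_CLASS (klass), my_marshal_va); */
}
```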
If the height is not a multiple of the macroblock size, then the memory
of the last line is reused for all extra lines. This is no problem if
the last line is duplicated properly. However, if the extra lines are
not initialized properly during encoding, then the last visible line is
overwritten with undefined data.
Use an extra buffer to avoid this problem.
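A minimal sketch of the idea, assuming a 16-pixel macroblock size and a
single-plane layout (the helper and its names are hypothetical): copy
the frame into a scratch buffer whose height is rounded up, duplicating
the last visible line into the padding rows.

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

/* Sketch: return a copy of the frame with its height padded up to the
 * macroblock size, so the encoder never touches the caller's memory
 * past the last visible line. */
static uint8_t *
copy_with_padded_height (const uint8_t *src, size_t stride, size_t height)
{
  size_t padded = (height + 15) & ~(size_t) 15;   /* round up to 16 */
  uint8_t *dst = (uint8_t *) calloc (padded, stride);

  if (dst == NULL)
    return NULL;
  memcpy (dst, src, stride * height);
  /* duplicate the last visible line into the extra padding lines */
  for (size_t y = height; y < padded; y++)
    memcpy (dst + y * stride, src + (height - 1) * stride, stride);
  return dst;
}
```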