The Android code path is slightly different from the EGLFS one,
so I previously added a HAVE_QT_ANDROID define for use with qmake.
Here I also add it in Meson, although I expect nobody will ever use
Meson to build this, as it's complicated.
It is perfectly legal to use the <module/class> style of includes with Qt,
and it avoids the need to have the module's include dir in the include path.
Binding the vertex array to 0 already unbinds everything else.
With the previous order, older versions of the Intel GL driver printed
errors for every single call when the vertex attrib arrays were
disabled after the vertex array had been bound to 0.
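Sketch of the resulting order (the attribute indices and helper function are illustrative, not the element's actual code):

#include <gst/gl/gl.h>

/* Disable the attrib arrays while the vertex array is still bound,
 * then unbind the vertex array last; binding it to 0 already unbinds
 * everything associated with it. */
static void
unbind_vertex_state (GstGLContext * context)
{
  const GstGLFuncs *gl = context->gl_vtable;

  gl->DisableVertexAttribArray (0);     /* position attribute */
  gl->DisableVertexAttribArray (1);     /* texcoord attribute */
  gl->BindVertexArray (0);
}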
This is needed as a precursor to allowing capture of IEC61937
formats. We now also need to include the channel map while converting
format info to caps so that a correct channel mask is generated for
pulsesrc's caps.
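A rough sketch of the idea; map_pa_position() is a hypothetical stand-in for the full PulseAudio-to-GStreamer position table:

#include <gst/audio/audio.h>
#include <pulse/channelmap.h>

/* Hypothetical helper: the real code maps every pa_channel_position_t,
 * only two cases are shown here. */
static GstAudioChannelPosition
map_pa_position (pa_channel_position_t pos)
{
  switch (pos) {
    case PA_CHANNEL_POSITION_FRONT_LEFT:
      return GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT;
    case PA_CHANNEL_POSITION_FRONT_RIGHT:
      return GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT;
    default:
      return GST_AUDIO_CHANNEL_POSITION_INVALID;
  }
}

/* Derive a channel-mask for the source caps from the PulseAudio
 * channel map instead of leaving it unset. */
static void
set_channel_mask (GstCaps * caps, const pa_channel_map * map)
{
  GstAudioChannelPosition pos[PA_CHANNELS_MAX];
  guint64 mask = 0;
  gint i;

  for (i = 0; i < map->channels; i++)
    pos[i] = map_pa_position (map->map[i]);

  if (gst_audio_channel_positions_to_mask (pos, map->channels, FALSE, &mask))
    gst_caps_set_simple (caps, "channel-mask", GST_TYPE_BITMASK, mask, NULL);
}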
PA_INVALID_INDEX, the default value, is unfortunately !0.
Setting the volume before the stream is created will put the ring
buffer in error state. Unfortunately, that's what spice-gtk does.
If the pipeline consumes the data slower than the available network speed,
for example because sync=true, it is useless to increase the blocksize,
and reading in blocks that are too big can cause the connection to time out.
Closes #463
In 2018 Khronos changed the GL header guards. If we don't detect
this properly we end up with plenty of symbol redefinitions
(since we would be including the "same" header twice).
Instead, detect whether the "newer" header was already included and,
if so, define the "old" guard to avoid this situation.
Fixes #523
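Roughly, the detection looks like this; the guard macro names are assumptions from memory and should be checked against the actual Khronos headers:

/* If the post-2018 GLES2 header (with the new include guard) has
 * already been pulled in, define the old guard so the legacy header
 * becomes a no-op instead of redefining the same symbols. */
#if defined(__gles2_gl2_h_) && !defined(__gl2_h_)
#define __gl2_h_ 1
#endif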
Pull video frame fields into local variables. Without this the
compiler must assume that they could have changed on every use and read
them from memory again.
This reduces the inner loop from 6 memory reads per pixel to 4, while
the number of writes stays at 3.
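As an illustration of the pattern (much simplified; the real conversion loop and field names differ):

#include <gst/video/video.h>

/* Hoist the per-frame constants out of the loop so the compiler does
 * not have to reload them from the GstVideoFrame on every pixel. */
static void
fill_plane (GstVideoFrame * frame, guint8 value)
{
  guint8 *data = GST_VIDEO_FRAME_PLANE_DATA (frame, 0);
  gint stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);
  gint width = GST_VIDEO_FRAME_COMP_WIDTH (frame, 0);
  gint height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, 0);
  gint x, y;

  for (y = 0; y < height; y++) {
    guint8 *line = data + y * stride;

    for (x = 0; x < width; x++)
      line[x] = value;
  }
}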
PulseAudio defines PA_RATE_MAX as the maximum sampling rate that it
supports. We were previously exposing a maximum rate of INT_MAX, which
is incorrect, but worked because nothing was really using a rate greater
than 384000 Hz.
While playing DSD data, we hit a case where there might be very high
sample rates (>1 MHz), and pulsesink fails during stream creation with
such streams because it erroneously advertises that it supports such
rates.
Since PA_RATE_MAX is #define'd to (8*48000U), we can't just use it in
the caps string. Instead, we fix up the rate to what we actually support
whenever we use our macro caps.
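A minimal sketch of such a fixup (the function name is illustrative; PA_RATE_MAX comes from pulse/sample.h):

#include <gst/gst.h>
#include <pulse/sample.h>

/* Rewrite the rate field of the otherwise static caps so we never
 * advertise rates that PulseAudio cannot handle. */
static GstCaps *
fixup_rate (GstCaps * caps)
{
  guint i;

  caps = gst_caps_make_writable (caps);
  for (i = 0; i < gst_caps_get_size (caps); i++) {
    GstStructure *s = gst_caps_get_structure (caps, i);

    gst_structure_set (s, "rate", GST_TYPE_INT_RANGE, 1, (gint) PA_RATE_MAX,
        NULL);
  }
  return caps;
}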
This allows us to use the GstVideoOverlayComposition API and correctly
handle pre-multiplied alpha, while also doing the alpha conversion only
once for the whole frame instead of twice.
At a later point we can attach the meta to the buffer instead of
blending ourselves, if downstream supports that.
https://bugzilla.gnome.org/show_bug.cgi?id=797091
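For reference, a sketch of how an ARGB overlay buffer (which must carry a GstVideoMeta) can be wrapped and blended with that API; coordinates and the premultiplied flag are illustrative:

#include <gst/video/video.h>
#include <gst/video/video-overlay-composition.h>

static gboolean
blend_overlay (GstBuffer * overlay_buf, gint x, gint y, guint w, guint h,
    GstVideoFrame * video_frame)
{
  GstVideoOverlayRectangle *rect;
  GstVideoOverlayComposition *comp;
  gboolean ret;

  /* overlay_buf is assumed to hold premultiplied ARGB pixels and to
   * have a GstVideoMeta describing its layout */
  rect = gst_video_overlay_rectangle_new_raw (overlay_buf, x, y, w, h,
      GST_VIDEO_OVERLAY_FORMAT_FLAG_PREMULTIPLIED_ALPHA);
  comp = gst_video_overlay_composition_new (rect);
  gst_video_overlay_rectangle_unref (rect);

  /* blend once onto the mapped output frame; alternatively the
   * composition could be attached as a meta for downstream to handle */
  ret = gst_video_overlay_composition_blend (comp, video_frame);
  gst_video_overlay_composition_unref (comp);

  return ret;
}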
The speex headers assume that WIN32 will always be defined when
building on Windows, but this is only true by default on MinGW.
Always set it explicitly.
We now have options for all plugins, so we will just disable these in
the cerbero recipe instead. These require external deps, so they won't
affect gst-build either.
This effectively (but optionally) requires libjpeg-turbo, which
ships with a .pc file and is what pretty much everyone uses for
libjpeg these days anyway, so hopefully this shouldn't be a problem.
https://bugzilla.gnome.org/show_bug.cgi?id=796947
Otherwise there may be no valid typedef of GLsync.
...
/usr/include/gstreamer-1.0/gst/gl/gstglfuncs.h:93:24: note: in definition of macro 'GST_GL_EXT_FUNCTION'
ret (GSTGLAPI *name) args;
^~~~
/usr/include/gstreamer-1.0/gst/gl/glprototypes/sync.h:33:23: error: 'GLsync' has not been declared
(GLsync sync))
^~~~~~
...
https://bugzilla.gnome.org/show_bug.cgi?id=796879
gst_structure_get() is declared with G_GNUC_NULL_TERMINATED, i.e.
__attribute__((__sentinel__)), which means gcc will generate a
warning if the last parameter passed to the function is not NULL
(where a valid NULL in this context is defined as zero with any
pointer type).
The C callers of gst_structure_get() within gst-plugins-good
use the C NULL definition (i.e. ((void*)0)), which is a valid sentinel.
However, gstid3v2mux.cc uses the C++ NULL definition (i.e. 0L), which
is not a valid sentinel without an explicit cast to a pointer type.
Upstream-Status: Pending
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Note from Edward Hervey: Patch from git.yoctoproject.org
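The fix amounts to terminating the varargs with an explicitly pointer-typed sentinel; a minimal illustration (field name and type are made up):

#include <gst/gst.h>

/* In C++ code, pass a pointer-typed sentinel so the terminator
 * satisfies __attribute__((__sentinel__)); plain 0L does not. */
static gint
get_track_number (const GstStructure * structure)
{
  gint track = 0;

  gst_structure_get (structure, "track-number", G_TYPE_INT, &track,
      (void *) NULL);
  return track;
}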
musl libc generates warnings if <sys/poll.h> is included directly.
Upstream-Status: Pending
Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
Tested on Linux with X11/Wayland and semi-tested on Windows.
Windows crashes on item destruction; however, this is better than nothing.
Also fix up some win32 build issues along the way caused by mismatched {}
and G_STMT_{START,END}.
Like pngenc, automatically send EOS after the first frame when
snapshot=true is set.
Example pipeline:
appsrc ! jpegenc snapshot=true ! filesink location=out.jpg
This is especially useful for limited/slow hardware.
Otherwise calling gst_video_convert_sample() is a better option
(internally uses videoconvert and videoscale).
https://bugzilla.gnome.org/show_bug.cgi?id=755453
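For instance, an application can run such a pipeline and simply wait for EOS on the bus (sketch; videotestsrc stands in for the appsrc above and error handling is omitted):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "videotestsrc ! jpegenc snapshot=true ! filesink location=out.jpg",
      NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* with snapshot=true, EOS follows right after the first encoded frame */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}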
If seeks happen in quick succession, data reading and do_request() can
sometimes occur while the seek flush event is in progress, and an error
occurs because retry_count reaches the maximum. Thus, reset retry_count
if a flush occurs after do_request() and read_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=790199
Some cameras fail to send an end-of-image (EOI) marker,
and such frames can't be properly decoded by jpegdec/libjpeg.
This commit parses the frame, making sure it has an EOI.
If there isn't one, the EOI gets added to the buffer.
A similar fixup is done in the rtpjpegdepay element,
and it makes sense to do it in jpegdec as well.
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
https://bugzilla.gnome.org/show_bug.cgi?id=791988
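The gist of the fixup, much simplified (the real code walks the JPEG markers rather than only checking the last two bytes):

#include <gst/gst.h>

/* Append an EOI marker (0xFF 0xD9) if the frame does not end with one,
 * so libjpeg sees a complete image. */
static GstBuffer *
ensure_eoi (GstBuffer * frame)
{
  GstMapInfo map;
  gboolean has_eoi = FALSE;

  if (gst_buffer_map (frame, &map, GST_MAP_READ)) {
    has_eoi = map.size >= 2 && map.data[map.size - 2] == 0xff
        && map.data[map.size - 1] == 0xd9;
    gst_buffer_unmap (frame, &map);
  }

  if (!has_eoi) {
    static const guint8 eoi[] = { 0xff, 0xd9 };
    GstBuffer *tail = gst_buffer_new_allocate (NULL, sizeof (eoi), NULL);

    gst_buffer_fill (tail, 0, eoi, sizeof (eoi));
    frame = gst_buffer_append (frame, tail);
  }
  return frame;
}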
This reverts commit 47fd4d391e.
This patch is incorrect. It doesn't actually compile, and causes a crash
because the viv-fb window implementation needs a native EGL handle
to pass to fbCreateWindow, but the GstGLDisplayEGL handle is actually
an EGLDisplay now (and gets cast to the wrong type).
This is a regression introduced by "03db374 - souphttpsrc: retry
request on early termination from the server".
The problem was that when seeking back to 0, we would not end up calling
add_range_header(), which in addition to adding range headers *ALSO* sets
the read_position to the requested one.
This would result in a wide variety of later failures, like reading
again and again instead of stopping properly.
This simplifies the code a lot without any functional changes apart from
not closing the display connection. Closing the display connection is
not safe to do, as it is shared with all the other code in the same
process and no reference counting or anything similar happens at the
platform layer.
Move the package defines for GST_PLUGIN_DEFINE from the
command line into the source file to avoid quoting issues
(with -DPACKAGE_NAME="foo" the quotes won't actually make
it to the compiler, so it no longer sees a string constant).
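In other words, something along these lines in the plugin source instead of -D flags on the command line (all values below are placeholders):

/* define the package strings in the source itself so the quoting
 * survives and GST_PLUGIN_DEFINE sees real string constants */
#define PACKAGE "gst-plugins-good"
#define VERSION "1.x.y"

#include <gst/gst.h>

static gboolean
plugin_init (GstPlugin * plugin)
{
  return TRUE;
}

GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
    example, "example plugin", plugin_init, VERSION,
    "LGPL", PACKAGE, "https://gstreamer.freedesktop.org/")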
1. Propagate the GstGLDisplay we create
2. Add the created GstGLContext to the propagated GstGLDisplay
Otherwise, with multi-branch GL pipelines involving gtkglsink, things
will fall apart and errors will be generated somewhere.
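A rough sketch of the idea (element and variable names are illustrative; the actual qmlglsink code differs):

#include <gst/gl/gl.h>

/* Make the locally created GL context discoverable: add it to the
 * shared GstGLDisplay and announce that display with a HAVE_CONTEXT
 * message so other branches of the pipeline pick it up. */
static void
share_gl_objects (GstElement * element, GstGLDisplay * display,
    GstGLContext * local_context)
{
  GstContext *context;

  gst_gl_display_add_context (display, local_context);

  context = gst_context_new (GST_GL_DISPLAY_CONTEXT_TYPE, TRUE);
  gst_context_set_gl_display (context, display);
  gst_element_post_message (element,
      gst_message_new_have_context (GST_OBJECT (element), context));
}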
Except for gst/gl/gstglfuncs.h, it is up to the client app to include
these headers.
This is coherent with the fact that gstreamer-gl.pc does not
require any egl.pc/gles.pc, i.e. it is the responsibility
of the app to find these headers within its build setup.
For example gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example, with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
it will *no longer* automatically include EGL/egl.h and GLES2/gl2.h.
This is good because the app might want to use only the gstgl API
without having to bother about GL headers.
Also added a test: cd tests/check && make libs/gstglheaders.check
https://bugzilla.gnome.org/show_bug.cgi?id=784779
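So an app using those headers now pulls in the platform headers itself, e.g.:

/* the application's own build setup provides the EGL/GLES include
 * paths; gstreamer-gl.pc no longer drags them in */
#include <EGL/egl.h>
#include <GLES2/gl2.h>

#include <gst/gl/egl/gstglcontext_egl.h>
#include <gst/gl/egl/gstgldisplay_egl.h>
#include <gst/gl/egl/gstglmemoryegl.h>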
This is useful, for example, for autoplay. With autoplay, it is necessary to
wait until the scene graph is fully set up. This signal is emitted once the
QML item node is ready. So, inside a connected slot, the pipeline's state
can be set to PLAYING to automatically start playback as soon as the QML
script is loaded.
https://bugzilla.gnome.org/show_bug.cgi?id=786246
This fixes a memory leak. When dropframe-threshold has been set,
libvpx may output fewer frames than it receives as input, which causes
some GstVideoCodecFrames to queue up in GstVideoEncoder's internal
frame queue with no chance of ever all being released. And because
the frames keep references to the input buffers, the input buffer
pool keeps allocating new buffers and memory usage grows very fast.
For example the following pipeline's memory usage grows at a rate
of about 1GB per minute!
videotestsrc ! capsfilter caps=video/x-raw,width=1920,height=1080,framerate=30/1,format=I420 ! \
vp8enc target-bitrate=1000000 end-usage=cbr dropframe-threshold=95 ! fakesink
https://bugzilla.gnome.org/show_bug.cgi?id=783086
Instead of just sending a sticky event with them downstream. This allows
getting the HTTP headers easily in the application, and especially also
on errors.
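Assuming the element message is named "http-headers" (check the souphttpsrc documentation), an application can pick the headers up from the bus roughly like this:

#include <gst/gst.h>

/* bus watch callback; the message name and its contents are
 * assumptions to verify against the souphttpsrc docs */
static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT &&
      gst_message_has_name (msg, "http-headers")) {
    const GstStructure *s = gst_message_get_structure (msg);
    gchar *str = gst_structure_to_string (s);

    g_print ("HTTP headers: %s\n", str);
    g_free (str);
  }
  return TRUE;
}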
QML can destroy the video widget at any time, leaving
us with a dangling pointer. Use a lock and a proxy
object to cope with that, and block in the widget
destructor if there are ongoing calls into the widget.