Clear the error as soon as we determine that the download failed,
otherwise there are code paths where we might return without ever
clearing it, which would then leak the GError. Also, we can pass a
NULL GError pointer to _fetch_uri(), so just do that instead of
passing one that we would only free again right away anyway.
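A minimal sketch of the resulting pattern (hypothetical element code; the
exact _fetch_uri() signature and the GstFoo names are assumptions):

/* Sketch only: clear the error as soon as failure is known, and pass
 * NULL where we would otherwise free the error right away anyway. */
static gboolean
download_sample (GstFoo * self)
{
  GError *err = NULL;

  if (!_fetch_uri (self, self->location, &err)) {
    GST_WARNING_OBJECT (self, "download failed: %s", err->message);
    g_clear_error (&err);       /* no later code path can leak it now */
    return FALSE;
  }

  /* when we don't care about the error details, don't collect them */
  return _fetch_uri (self, self->next_location, NULL);
}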
This upload method detects and optimizes uploads of DMABuf memory. This is
done by creating and caching EGLImage wrappers around the DMABufs. The
EGLImages are then bound to a texture, which gets converted using a
standard shader.
Example pipeline:
GST_GL_PLATFORM=egl \
gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! \
video/x-raw,format=NV12 ! glimagesink
https://bugzilla.gnome.org/show_bug.cgi?id=743345
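As a rough illustration of the mechanism (not the actual upload code: a
single plane, no caching, no error handling, and the extension prototypes
are assumed to be exported; otherwise resolve them with eglGetProcAddress()):

#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Sketch: wrap one DMABuf plane in an EGLImage and bind it to a 2D texture. */
static GLuint
import_dmabuf_plane (EGLDisplay dpy, int fd, int width, int height,
    int stride, uint32_t drm_fourcc)
{
  const EGLint attribs[] = {
    EGL_WIDTH, width,
    EGL_HEIGHT, height,
    EGL_LINUX_DRM_FOURCC_EXT, (EGLint) drm_fourcc,
    EGL_DMA_BUF_PLANE0_FD_EXT, fd,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
    EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
    EGL_NONE
  };
  EGLImageKHR image = eglCreateImageKHR (dpy, EGL_NO_CONTEXT,
      EGL_LINUX_DMA_BUF_EXT, (EGLClientBuffer) NULL, attribs);
  GLuint tex = 0;

  glGenTextures (1, &tex);
  glBindTexture (GL_TEXTURE_2D, tex);
  glEGLImageTargetTexture2DOES (GL_TEXTURE_2D, image);
  return tex;
}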
Maps GstVideoFormats to suitable DRM fourccs that work with
glcolorconvert, using gst_gl_memory_alloc(). We mostly require only
4 formats to be supported by the driver: the DRM equivalents of RGB16,
RGBA, R8 and RG88. This way it is compatible with desktop GL, since
GL_TEXTURE_2D is used, and it limits the driver requirements. With this
we can support virtually all formats that glcolorconvert supports.
https://bugzilla.gnome.org/show_bug.cgi?id=743345
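A sketch of the kind of mapping this implies, with fourccs from drm_fourcc.h
(the per-format choices below are illustrative, not a copy of the real table):

#include <gst/video/video.h>
#include <drm_fourcc.h>

/* Sketch: pick a DRM fourcc per plane, needing only the four formats above;
 * modifiers and the remaining formats are omitted. */
static guint32
drm_fourcc_for_plane (GstVideoFormat format, guint plane)
{
  switch (format) {
    case GST_VIDEO_FORMAT_RGB16:
      return DRM_FORMAT_RGB565;   /* RGB16 */
    case GST_VIDEO_FORMAT_RGBA:
      return DRM_FORMAT_ABGR8888; /* RGBA, bytes in R,G,B,A order */
    case GST_VIDEO_FORMAT_GRAY8:
      return DRM_FORMAT_R8;       /* R8 */
    case GST_VIDEO_FORMAT_NV12:
      /* Y plane as R8, interleaved UV plane as RG88 */
      return plane == 0 ? DRM_FORMAT_R8 : DRM_FORMAT_GR88;
    default:
      return 0;                   /* caller falls back to normal upload */
  }
}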
Adds a more meaningful error than
"Failed to convert multiview video buffer", which is always used
when prepare_next_buffer() fails in gst_glimage_sink_prepare().
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Update opencvtextoverlay to inherit from GstOpencvVideoFilter instead of
from GstElement. This means less code and more uniformity with other OpenCV
elements. The chain/transform function is now a third of the size it was
before.
Update pyramidsegment to inherit from GstOpencvVideoFilter instead of from
GstElement. This means less code and more uniformity with other OpenCV
elements.
When the mode of decklinkvideosink is set to "auto", the sink claims to
support the full set of caps that it can support across all modes. Then, every
time new caps are set, the sink will automatically find the correct mode for
these caps and set it.
Caveat: We have no way to know whether a specific mode will actually work with
your hardware. Therefore, if you try sending 4K video to a 1080p screen, it
will silently fail; we have no way to know that in advance. Manually setting
the mode at least gave the user a way to double-check what they are doing.
https://bugzilla.gnome.org/show_bug.cgi?id=759600
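For illustration only (the caps and element settings here are made up), a
pipeline no longer needs the mode picked by hand:

#include <gst/gst.h>

/* Sketch: with mode=auto the sink derives the display mode from the
 * negotiated caps instead of a manually configured one. */
int
main (int argc, char **argv)
{
  GstElement *pipeline;
  GstBus *bus;
  GError *err = NULL;

  gst_init (&argc, &argv);
  pipeline = gst_parse_launch ("videotestsrc is-live=true ! "
      "video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1 ! "
      "decklinkvideosink mode=auto", &err);
  if (pipeline == NULL) {
    g_printerr ("failed to create pipeline: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  bus = gst_element_get_bus (pipeline);
  gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  return 0;
}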
Update motioncells to inherit from GstOpencvVideoFilter instead of from
GstElement. This means less code and more uniformity with other OpenCV
elements.
Otherwise qtkitvideosrc fails to build on OS X 10.10.4,
because QTKit has been deprecated since OS X 10.9.
Also set -mmacosx-version-min=10.8 in front to allow
the user or cerbero to override the version.
https://bugzilla.gnome.org/show_bug.cgi?id=745564
Add gst_gl_memory_allocator_get_default() to get the default allocator based
on the OpenGL version. This allows us to stop hardcoding the PBO allocator,
which isn't supported on GLES2.
Fixes GL upload on iOS 9, among other things.
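Rough usage sketch ('context' is assumed to be an already-created
GstGLContext):

#include <gst/gl/gl.h>

/* Sketch: ask for a GL-version-appropriate allocator instead of hardcoding
 * the PBO one, e.g. via gst_allocator_find (GST_GL_MEMORY_PBO_ALLOCATOR_NAME),
 * which assumes PBO support that GLES2 does not have. */
static GstAllocator *
pick_gl_allocator (GstGLContext * context)
{
  return GST_ALLOCATOR (gst_gl_memory_allocator_get_default (context));
}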
Performing any GL function marshalling off the GL thread while holding
glimagesink's render lock is prone to deadlocks between the GL thread and
the non-GL thread. What can happen is this:
1. The non-GL thread attempts to marshal a function call to the GL thread.
2. While 1 is happening, the winsys delivers an event (say, a resize).
3. This calls back into glimagesink, which takes the render lock.
4. The GL function marshalling from 1 is still trying to run on the GL
thread while its caller already holds glimagesink's render lock. The two
threads are now waiting on each other, which deadlocks.
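The inversion is the classic pattern below (an illustration with plain GLib
primitives, not the actual glimagesink code):

#include <glib.h>

static GMutex render_lock;     /* stands in for glimagesink's render lock */
static GMutex gl_dispatch;     /* stands in for "the GL thread is busy" */

/* non-GL thread: takes the render lock, then waits on the GL thread */
static gpointer
non_gl_thread (gpointer data)
{
  g_mutex_lock (&render_lock);
  g_mutex_lock (&gl_dispatch);    /* "marshal a call to the GL thread" */
  g_mutex_unlock (&gl_dispatch);
  g_mutex_unlock (&render_lock);
  return NULL;
}

/* GL thread: a winsys event (resize) calls back into glimagesink */
static gpointer
gl_thread (gpointer data)
{
  g_mutex_lock (&gl_dispatch);
  g_mutex_lock (&render_lock);    /* the callback takes the render lock */
  g_mutex_unlock (&render_lock);
  g_mutex_unlock (&gl_dispatch);
  return NULL;
}

int
main (void)
{
  GThread *a = g_thread_new ("non-gl", non_gl_thread, NULL);
  GThread *b = g_thread_new ("gl", gl_thread, NULL);

  /* with unlucky timing, each thread ends up waiting for the lock the
   * other one holds and neither join ever returns */
  g_thread_join (a);
  g_thread_join (b);
  return 0;
}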
Update edgedetect to inherit from GstOpencvVideoFilter instead of from
GstElement. This means less code and more uniformity with other OpenCV
elements.
E.g. when wrapping a data pointer, we don't want to map/unmap past the end
of the pointer by the alignment bytes.
Instead, track that information separately, as maxsize is used for mapping by
GstMemory and thus represents a size without any alignment padding bytes.
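For instance, with plain GstMemory (not the GL-specific code), wrapping an
existing pointer must advertise a maxsize that covers only real bytes:

#include <gst/gst.h>

/* Sketch: wrap 'data' of 'size' bytes. maxsize is what GstMemory allows to
 * be mapped, so it must not include alignment padding that isn't there. */
static GstMemory *
wrap_plain_buffer (gpointer data, gsize size)
{
  return gst_memory_new_wrapped (0 /* flags */, data,
      size /* maxsize */, 0 /* offset */, size /* size */, NULL, NULL);
}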
Add support for the two other OpenCV linear filters for smoothing
images. The new API does support a spatial sigma in the bilateral filter,
hence bringing that property back.
Also adds a reference to the new documentation.
While encoding a frame in ASCII mode, four bytes are needed per component,
and after every 20 bytes a '\n' is added. So the calculation should be
size = size * (4 + 1/20), and it should exclude the header being written.
Since the header is also being included in the calculation, memory
mishandling occurs.
https://bugzilla.gnome.org/show_bug.cgi?id=759520
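A back-of-the-envelope sketch of the intended sizing (hypothetical helper;
the real encoder keeps the header separate):

#include <glib.h>

/* Sketch: ASCII mode output size. Each component costs 4 bytes, plus one
 * '\n' per 20 bytes of raw data; the header is added once, unscaled. */
static gsize
ascii_pnm_output_size (gsize data_size, gsize header_len)
{
  return data_size * 4 + data_size / 20   /* == size * (4 + 1/20) */
      + header_len;
}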
Requires the use of GstGLVideoAllocationParams, however any user can set
their own parameters along with an allocator, which will be used to allocate
the correct memory type.
- Create GstGLVideoAllocationParams, which is a GstGLAllocationParams
  subclass.
- Make it possible to allocate glmemory objects directly if no frills are
  needed.
This is made possible by a subclassable GstGLAllocationParams that holds
the allocation parameters.
Every allocation now goes through gst_gl_base_memory_alloc, with the
allocation parameters specified in a single struct to allow extension by
different allocators.
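A rough sketch of the resulting allocation path (the exact
gst_gl_video_allocation_params_new() argument list has changed between
releases, so the call below is approximate and the texture type is just an
example):

#include <gst/gl/gl.h>

/* Sketch: bundle the parameters in a GstGLAllocationParams subclass and
 * hand them to gst_gl_base_memory_alloc(). */
static GstGLMemory *
alloc_plane (GstGLContext * context, GstVideoInfo * v_info, guint plane)
{
  GstGLMemoryAllocator *allocator =
      gst_gl_memory_allocator_get_default (context);
  GstGLVideoAllocationParams *params =
      gst_gl_video_allocation_params_new (context, NULL, v_info, plane, NULL,
      GST_GL_TEXTURE_TARGET_2D, GST_VIDEO_GL_TEXTURE_TYPE_RGBA);
  GstGLBaseMemory *mem =
      gst_gl_base_memory_alloc (GST_GL_BASE_MEMORY_ALLOCATOR (allocator),
      (GstGLAllocationParams *) params);

  gst_gl_allocation_params_free ((GstGLAllocationParams *) params);
  gst_object_unref (allocator);
  return (GstGLMemory *) mem;
}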
The OpenCV cvSmooth function is deprecated [0] and the documentation
recommends using GaussianBlur() instead. This makes the spatial property go
unused. Mark it as deprecated and non-functional; it will be removed
in the next cycle.
[0] http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html
The imported memory has already been allocated; passing allocation
parameters with alignment confuses the memory, which ends up with a
size different from maxsize and leads to an overrun when the memory
is copied.