e.g. when wrapping a data pointer we don't want to map/unmap off the end of
the pointer with the alignment bytes.
Instead, track that information separately, as maxsize is used for mapping by
GstMemory and thus represents a size without any alignment padding bytes.
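
A minimal sketch, using only the core GstMemory API, of why maxsize must not
include padding when wrapping a caller-provided pointer (the buffer and its
size are purely illustrative):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  /* illustrative data buffer owned by the caller */
  static guint8 data[4096];
  GstMemory *mem;
  GstMapInfo map;

  gst_init (&argc, &argv);

  /* maxsize is exactly the number of bytes we own: mapping is bounded by
   * maxsize, so adding alignment padding here would let a map/unmap run
   * off the end of the wrapped pointer. */
  mem = gst_memory_new_wrapped (GST_MEMORY_FLAG_READONLY, data,
      sizeof (data) /* maxsize */ , 0 /* offset */ ,
      sizeof (data) /* size */ , NULL, NULL);

  if (gst_memory_map (mem, &map, GST_MAP_READ)) {
    /* map.size never exceeds the wrapped region */
    gst_memory_unmap (mem, &map);
  }

  gst_memory_unref (mem);
  return 0;
}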
Adding support for the two other OpenCV linear filters to smooth
images. The new API does support a spatial sigma in the bilateral filter,
hence bringing that property back.
Adding a reference to the new documentation.
While encoding the frame in ASCII mode, four bytes are needed per component
and after every 20 bytes a \n will be added. So the calculation should be
size = size * (4 + 1 / 20). This should exclude the header being written.
Since the header is also being included in the calculation, memory
mishandling is happening.
https://bugzilla.gnome.org/show_bug.cgi?id=759520
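
A hedged sketch of the intended size bookkeeping; the helper and variable
names are illustrative and not the actual pnmenc code:

#include <glib.h>
#include <string.h>

/* 4 bytes per component plus the periodic newlines, per the
 * size = size * (4 + 1 / 20) calculation above */
static gsize
ascii_payload_size (gsize num_components)
{
  return num_components * 4 + num_components / 20;
}

static gsize
total_output_size (gsize num_components, const gchar * header)
{
  /* the header length is added afterwards, not multiplied into the payload */
  return ascii_payload_size (num_components) + strlen (header);
}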
Requires the use of GstGLVideoAllocationParams, however any user can set their
own parameters along with an allocator, which will be used to allocate the
correct memory type.
- Create GstGLVideoAllocationParams which is a GstGLAllocationParams subclass.
- Make it possible to allocate glmemory objects directly if no frills are
needed.
This is made possible by a subclassable GstGLAllocationParams that holds
the allocation parameters.
Every allocation now goes through gst_gl_base_memory_alloc, with the
allocation parameters specified in a single struct to allow
extension by different allocators.
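
A rough sketch of that allocation path. The exact
gst_gl_video_allocation_params_new() argument list has varied between
GStreamer versions and the allocator lookup is simplified, so treat this as
an illustration rather than the upstream code:

#include <gst/video/video.h>
#include <gst/gl/gl.h>

static GstGLMemory *
allocate_rgba_gl_memory (GstGLContext * context, GstVideoInfo * v_info)
{
  GstAllocator *allocator;
  GstGLVideoAllocationParams *params;
  GstGLMemory *gl_mem;

  /* the GL allocator registered by libgstgl */
  allocator = gst_allocator_find (GST_GL_MEMORY_ALLOCATOR_NAME);

  /* GstGLVideoAllocationParams is a GstGLAllocationParams subclass; other
   * allocators can define their own subclasses with extra fields */
  params = gst_gl_video_allocation_params_new (context, NULL, v_info,
      0 /* plane */ , NULL /* no video alignment */ ,
      GST_GL_TEXTURE_TARGET_2D, GST_GL_RGBA);

  gl_mem = (GstGLMemory *)
      gst_gl_base_memory_alloc ((GstGLBaseMemoryAllocator *) allocator,
      (GstGLAllocationParams *) params);

  gst_gl_allocation_params_free ((GstGLAllocationParams *) params);
  gst_object_unref (allocator);

  return gl_mem;
}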
The OpenCV cvSmooth function is deprecated [0] and the documentation
recommends using GaussianBlur(). This makes the spatial property go
unused. Marking it as deprecated and non-functional; it will be removed
in the next cycle.
[0] http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html
The imported memory has already been allocated; passing allocation
parameters with alignment confuses the memory, which ends up with a
size different from maxsize and leads to an overrun when the memory
is copied.
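
A minimal, self-contained illustration (plain core GstMemory, not the GL
import path) of how alignment in GstAllocationParams makes size and maxsize
diverge, which is exactly what we want to avoid for already-allocated
imported memory:

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstAllocationParams params;
  GstMemory *mem;
  gsize size, offset, maxsize;

  gst_init (&argc, &argv);

  gst_allocation_params_init (&params);
  params.align = 15;            /* request 16-byte alignment */

  /* the default allocator reserves extra bytes to satisfy the alignment,
   * so maxsize can end up larger than the requested size */
  mem = gst_allocator_alloc (NULL, 4096, &params);
  size = gst_memory_get_sizes (mem, &offset, &maxsize);

  g_print ("size=%" G_GSIZE_FORMAT " maxsize=%" G_GSIZE_FORMAT "\n",
      size, maxsize);

  gst_memory_unref (mem);
  return 0;
}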
gst_mpdparser_parse_utctiming_node does not validate the parsed values completely. The following scenarios are incorrectly accepted:
- elements with no schemeIdUri property should be rejected
- elements with unrecognized UTCTiming scheme should be rejected
- elements with empty values should be rejected
The last one triggers a division by 0 in gst_dash_demux_poll_clock_drift:
clock_drift->selected_url = clock_drift->selected_url % g_strv_length (urls);
because urls is a valid pointer to an empty array.
https://bugzilla.gnome.org/show_bug.cgi?id=759547
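
For illustration only (the real fix is to reject the invalid UTCTiming
elements while parsing), a defensive guard around the snippet quoted above
would look roughly like this; only the names from that line are reused:

guint n_urls = urls ? g_strv_length (urls) : 0;

if (n_urls == 0)
  return;   /* no usable UTCTiming URLs, skip clock drift polling */

clock_drift->selected_url = clock_drift->selected_url % n_urls;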
Prefer GLMemory over sysmem. Also, when pushing GLMemory we now push the
original formats (UYVY on OSX, BGRA on iOS) and leave it to downstream to
convert.
It was added back in the day to make texture sharing work by default with
glimagesink inside playbin. These days glimagesink accepts (and converts) YUV
internally so it's no longer needed.
Switch to using IOSurface instead of CVOpenGLTextureCache on OSX. The latter can't be
used anymore to do YUV => RGB with opengl3 on El Capitan as GL_YCBCR_422_APPLE
has been removed from the opengl3 driver. Also switch to NV12 from UYVY, which
was the only YUV format supported by CVOpenGLTextureCache.
First of a few commits to stop using CVOpenGLTextureCache on OSX and use
IOSurfaces directly instead. CVOpenGLTextureCache hasn't been updated for OpenGL
3 which is why texture sharing is currently disabled on OSX.
We don't directly link to GL in the element, though we use GL headers.
For this reason we need to include the proper GL headers path. This
prevents this element from using a different GL header than libgstgl.
GCC automatically disables redundant declaration warnings for system
headers. As soon as we start using a non-system-installed mesa, we would
start having issues. The test for both wasn't setting any flags, so it
would work but then fail at runtime.
This is fixed by disabling that GCC warning in the code (only where
needed). The test is also fixed to avoid the false positive we had.
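
A sketch of disabling the warning only around the offending includes,
assuming the warning in question is -Wredundant-decls (the header path is
illustrative):

/* silence redundant declarations coming from the non-system GL headers */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wredundant-decls"
#include <GL/gl.h>
#pragma GCC diagnostic pop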