Prefer GLMemory over sysmem. Also, when pushing GLMemory, push the
original formats (UYVY on OSX, BGRA on iOS) and leave the conversion to
downstream.
First of a few commits to stop using CVOpenGLTextureCache on OSX and use
IOSurfaces directly instead. CVOpenGLTextureCache hasn't been updated for
OpenGL 3, which is why texture sharing is currently disabled on OSX.
When AVFoundation indicates a supported frame rate range, add it to
the caps. This is important for devices such as the iPhone 6, which
indicate a single AVFrameRateRange of 2fps - 60fps.
https://bugzilla.gnome.org/show_bug.cgi?id=751048
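As an illustration of the mapping (a sketch only; the helper name and locals
are assumptions, not the actual patch):

    #import <AVFoundation/AVFoundation.h>
    #include <gst/gst.h>

    /* Sketch: express an AVFrameRateRange in the caps as a framerate
     * range. gst_util_double_to_fraction() turns the floating point
     * rates reported by AVFoundation into fractions. */
    static void
    add_framerate_range (GstStructure * s, AVFrameRateRange * range)
    {
      gint min_n, min_d, max_n, max_d;

      gst_util_double_to_fraction ([range minFrameRate], &min_n, &min_d);
      gst_util_double_to_fraction ([range maxFrameRate], &max_n, &max_d);

      /* e.g. the iPhone 6 yields a single 2/1 - 60/1 range */
      gst_structure_set (s, "framerate", GST_TYPE_FRACTION_RANGE,
          min_n, min_d, max_n, max_d, NULL);
    }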
Unless stopRequest is set, we should unlock conditionally -- otherwise,
the 'create:' method can wake up to an empty buffer queue
and pull a nil buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=748054
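Roughly the intended pattern in 'create:' (a sketch; bufQueueLock and bufQueue
are illustrative names, and bufQueueLock is assumed to be an NSCondition):

    /* Wait conditionally: only return once a buffer is queued or a stop
     * was requested, so a wakeup with an empty queue cannot make us
     * pull a nil buffer. */
    [bufQueueLock lock];
    while (!stopRequest && [bufQueue count] == 0)
      [bufQueueLock wait];
    if (stopRequest) {
      [bufQueueLock unlock];
      return GST_FLOW_FLUSHING;
    }
    buf = [bufQueue lastObject];
    [bufQueue removeLastObject];
    [bufQueueLock unlock];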
Otherwise we might set bogus values or GST_CLOCK_TIME_NONE.
Also make sure to reset the caps field to NULL after unreffing
the caps to prevent accidental use afterwards, and unref any
old caps before we remember new caps.
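A generic sketch of the pattern (self->caps is a hypothetical field name);
gst_caps_replace() covers both cases:

    /* Remember new caps: the old caps (if any) are unreffed, the new
     * ones are reffed and stored. */
    gst_caps_replace (&self->caps, new_caps);

    /* Clear the cached caps: unref them and reset the field to NULL so
     * they cannot be used accidentally afterwards. */
    gst_caps_replace (&self->caps, NULL);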
Use YUV instead of RGB textures, then convert using the new Apple-specific
shader in GstGLColorConvert. Also use GLMemory directly instead of the GL
upload meta, avoiding the extra texture copy we had before.
When doing texture sharing we don't need to call CVPixelBufferLockBaseAddress
to map the buffer into CPU memory. This cuts about 10% of relative CPU time
from a vtdec ! glimagesink pipeline.
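A minimal sketch of the idea (texture_sharing and process_on_cpu are
hypothetical names, not the vtdec code):

    /* Only map the pixel buffer when the data is actually read on the
     * CPU; the texture sharing path hands the IOSurface straight to the
     * GPU, so the lock/unlock pair can be skipped. */
    if (!texture_sharing) {
      CVPixelBufferLockBaseAddress (pixbuf, kCVPixelBufferLock_ReadOnly);
      process_on_cpu (CVPixelBufferGetBaseAddress (pixbuf));
      CVPixelBufferUnlockBaseAddress (pixbuf, kCVPixelBufferLock_ReadOnly);
    }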
Use AVF-provided timings to timestamp output buffers. Base timestamps on the
running time at the point the first buffer is produced. Report 1-frame latency
based on the negotiated framerate instead of hardcoding 4ms latency.
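For the latency part, a sketch of the calculation (the helper name and the
fps_n/fps_d parameters are assumptions; they would come from the negotiated
caps):

    #include <gst/gst.h>

    /* One frame of latency, derived from the framerate and reported in
     * response to the LATENCY query instead of a hardcoded 4ms. */
    static gboolean
    report_latency (GstQuery * query, gint fps_n, gint fps_d)
    {
      GstClockTime latency;

      if (fps_n <= 0)
        return FALSE;

      latency = gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
      gst_query_set_latency (query, TRUE, latency, latency);
      return TRUE;
    }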
We will run into an assertion in set_caps() if we try to change
caps while the source is already running. Don't try to find new
caps in GstBaseSrc::negotiate() to prevent caps changes.
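A sketch of that idea (MySrc and the 'running' flag are illustrative, not the
actual type or field names):

    #include <gst/base/gstbasesrc.h>

    /* Once the source is running, negotiate() just returns TRUE so
     * GstBaseSrc never tries to switch caps on the live source;
     * otherwise chain up to the default negotiation. */
    static gboolean
    my_src_negotiate (GstBaseSrc * basesrc)
    {
      MySrc *self = MY_SRC (basesrc);

      if (self->running)
        return TRUE;

      return GST_BASE_SRC_CLASS (parent_class)->negotiate (basesrc);
    }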
Handle stride alignment through the use of the video meta API. The
code is based on the corevideobuffer implementation.
If the video meta API is not supported and the underlying buffer
contains padding, the core media buffer is copied to a system memory
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=727885
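A sketch of the planar case (variable and helper names are illustrative, not
the corevideobuffer code):

    #include <gst/video/video.h>
    #include <CoreVideo/CoreVideo.h>

    /* Attach a video meta carrying CoreVideo's per-plane strides and
     * offsets so downstream can handle row padding without a copy. */
    static void
    add_stride_meta (GstBuffer * buf, CVPixelBufferRef pixbuf,
        GstVideoFormat format, guint width, guint height)
    {
      gsize offset[GST_VIDEO_MAX_PLANES] = { 0, };
      gint stride[GST_VIDEO_MAX_PLANES] = { 0, };
      gsize cur_offset = 0;
      guint n_planes = CVPixelBufferGetPlaneCount (pixbuf);

      for (guint i = 0; i < n_planes; i++) {
        stride[i] = CVPixelBufferGetBytesPerRowOfPlane (pixbuf, i);
        offset[i] = cur_offset;
        cur_offset += stride[i] * CVPixelBufferGetHeightOfPlane (pixbuf, i);
      }

      gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
          format, width, height, n_planes, offset, stride);
    }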
AVCaptureDeviceFormat and AVFrameRateRange are available in iOS since 7.0,
so we need a more dynamic approach to support compilation with older
SDKs. We use an NSObject to avoid referencing those types, and key-value
coding or performSelector to access their properties.
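A sketch of that approach (the function is hypothetical; the strings are the
public KVC keys of the AVFoundation properties involved):

    #import <AVFoundation/AVFoundation.h>

    /* Read the supported frame rate ranges through key-value coding
     * only, so AVCaptureDeviceFormat / AVFrameRateRange never appear as
     * types in the source and the file still compiles with older SDKs. */
    static void
    print_frame_rate_ranges (AVCaptureDevice * device)
    {
      NSObject *format = [device valueForKey:@"activeFormat"];
      NSArray *ranges = [format valueForKey:@"videoSupportedFrameRateRanges"];

      for (NSObject *range in ranges) {
        double min = [[range valueForKey:@"minFrameRate"] doubleValue];
        double max = [[range valueForKey:@"maxFrameRate"] doubleValue];
        NSLog (@"supported range: %f - %f fps", min, max);
      }
    }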
On OSX, setting the pixel format on the output resets the capture device
to its native resolution, so until a proper solution is found we need to
update the caps if the output frame size has changed.