Otherwise qtkitvideosrc fails to build on OSX 10.10.4
because QTKit has been deprecated since OS X 10.9.
Also set -mmacosx-version-min=10.8 in front to allow
the user or cerbero to override the version.
https://bugzilla.gnome.org/show_bug.cgi?id=745564
Add gst_gl_memory_allocator_get_default to get the default allocator based on
the OpenGL version. Allows us to stop hardcoding the PBO allocator, which isn't
supported on GLES2.
Fixes GL upload on iOS 9 among other things.
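A minimal sketch of the lookup, assuming a valid GstGLContext in 'context';
the surrounding allocation-query handling is omitted:

    /* Ask the GL library for the default allocator for this context instead
     * of hardcoding the PBO allocator, which GLES2 cannot provide. */
    GstAllocator *allocator =
        GST_ALLOCATOR (gst_gl_memory_allocator_get_default (context));
    /* ... propose 'allocator' in the allocation query / pool config ... */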
Prefer GLMemory over sysmem. Also now when pushing GLMemory we push the
original formats (UYVY in OSX, BGRA in iOS) and leave it to downstream to
convert.
It was added back in the day to make texture sharing work by default with
glimagesink inside playbin. These days glimagesink accepts (and converts) YUV
internally so it's no longer needed.
Switch to using IOSurface instead of CVOpenGLTextureCache on OSX. The latter can't be
used anymore to do YUV => RGB with OpenGL 3 on El Capitan, as GL_YCBCR_422_APPLE
has been removed from the OpenGL 3 driver. Also switch to NV12 from UYVY, which
was the only YUV format supported by CVOpenGLTextureCache.
First of a few commits to stop using CVOpenGLTextureCache on OSX and use
IOSurfaces directly instead. CVOpenGLTextureCache hasn't been updated for OpenGL
3 which is why texture sharing is currently disabled on OSX.
rename gst-launch --> gst-launch-1.0
replace old elements with new elements (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
https://bugzilla.gnome.org/show_bug.cgi?id=759432
Year 12: I still don't understand how negotiation works.
Apparently gst_pad_query_caps doesn't do what I thought it did. To get the
actual caps that can flow through vtdec:src we must call gst_pad_peer_query_caps
with the template caps as filter.
Fixes negotiation with stuff that doesn't understand GLMemory (hello videoscale).
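A minimal sketch of the query described above; 'decoder' is the GstVideoDecoder
instance and variable names are illustrative:

    GstCaps *templcaps =
        gst_pad_get_pad_template_caps (GST_VIDEO_DECODER_SRC_PAD (decoder));
    GstCaps *caps =
        gst_pad_peer_query_caps (GST_VIDEO_DECODER_SRC_PAD (decoder), templcaps);
    gst_caps_unref (templcaps);
    /* 'caps' now reflects what can actually flow through vtdec:src */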
Rework negotiation implementing GstVideoDecoder::negotiate. Make it possible to
switch texture sharing on and off at runtime. Useful to (eventually) turn
texture sharing on in pipelines where glimagesink is linked only after
decoding has already started (for example OWR).
Improve decode error handling by avoiding calling into GstVideoDecoder from the
VT decode callback. This removes contention on the GST_VIDEO_DECODER_STREAM_LOCK
which used to make the decode callback slow enough for VT to start dropping lots
of frames once the first frame was dropped.
Otherwise, gst_vtenc_negotiate_profile_and_level will double-release as
it checks for profile_level != NULL. This caused crashes when the
vtenc instance was stopped and then restarted.
https://bugzilla.gnome.org/show_bug.cgi?id=757935
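A minimal sketch of the fix, assuming a 'profile_level' field holding the
CoreFoundation reference; clearing the pointer after releasing it keeps a
later negotiation from releasing it twice:

    if (self->profile_level) {
      CFRelease (self->profile_level);
      self->profile_level = NULL;
    }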
Use gst_gl_sized_gl_format_from_gl_format_type to get the format passed to
CVOpenGLESTextureCacheCreateTextureFromImage. Before this change extracting the
second texture from the pixel buffer was failing on iOS 9.1.
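A minimal sketch, assuming the signature of the GL library helper; the
format/type values shown are illustrative for the chroma plane of a bi-planar
buffer:

    /* e.g. GL_RG + GL_UNSIGNED_BYTE -> GL_RG8 */
    guint internal_format =
        gst_gl_sized_gl_format_from_gl_format_type (context, GL_RG,
            GL_UNSIGNED_BYTE);
    /* pass 'internal_format' to CVOpenGLESTextureCacheCreateTextureFromImage */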
Solved with a simple shader templating mechanism and string replacements
of the necessary sampler types/texture accesses and texture coordinate
mangling for rectangular and external-oes textures.
Add the various tokens/strings for the different texture types (2D, rect, oes).
Changes the GLMemory API to include the GstGLTextureTarget in all relevant
functions.
Update the relevant caps/templates for 2D only textures.
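A minimal sketch of the templating idea; the token values and the template
string are illustrative, not the exact strings used by GstGLColorConvert:

    static gchar *
    expand_shader_for_target (GstGLTextureTarget target)
    {
      /* "%s" marks where the sampler type for the target is substituted */
      const gchar *tmpl = "uniform %s tex;\n/* ... rest of the shader ... */";
      const gchar *sampler;

      switch (target) {
        case GST_GL_TEXTURE_TARGET_RECTANGLE:
          sampler = "sampler2DRect";
          break;
        case GST_GL_TEXTURE_TARGET_EXTERNAL_OES:
          sampler = "samplerExternalOES";
          break;
        default:
          sampler = "sampler2D";
          break;
      }

      return g_strdup_printf (tmpl, sampler);
    }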
GstVideoDecoder has its own logic for detecting when to reconfigure
which ultimately calls decide_allocation and results in a new
texture cache that has not been configured from our reconfigure check.
https://bugzilla.gnome.org/show_bug.cgi?id=755156
Fixes playback to GL memory on iOS, where the colours are messed
up by passing Luminance/LuminanceAlpha textures where
color convert expects R/RG textures.
https://bugzilla.gnome.org/show_bug.cgi?id=754504
The block that is dispatched async to the main thread assumed the
wrapping GstAvSampleVideoSink to be alive. However, at the time of
the block execution the GstObject instance that is dereferenced to access
the CA layer might already be freed, which caused occasional crashes.
Instead, we now only pass the CoreAnimation layer that needs to be
released to the block. We use __block to make sure the block is not
increasing the refcount of the CA layer again on its own.
https://bugzilla.gnome.org/show_bug.cgi?id=753081
CMBlockBuffer offers a model similar to GstBuffer, as it can
consist of multiple non-consecutive memory blocks.
Prior to this change, what we were doing was:
1) Incorrect:
CMBlockBufferCreateWithMemoryBlock does not copy the data,
but we gst_buffer_unmap'd right away.
2) Inefficient:
If the GstBuffer consisted of non-contiguous memory blocks,
gst_buffer_map resulted in malloc / memcpy.
With this change, we construct a CMBlockBuffer out of individual mapped
GstMemory objects. CMBlockBuffer is made to retain the GstMemory
objects (through the use of CMBlockBufferCustomBlockSource), so the
original GstBuffer can be unref'd.
https://bugzilla.gnome.org/show_bug.cgi?id=751241
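A minimal sketch of that construction, assuming CoreMedia's custom block
source API; helper and struct names are illustrative and error handling is
trimmed:

    /* requires <gst/gst.h> and <CoreMedia/CoreMedia.h> */
    typedef struct
    {
      GstMemory *mem;
      GstMapInfo map;
    } BlockCtx;

    static void
    block_free (void *refcon, void *doomed_block, size_t size)
    {
      BlockCtx *ctx = refcon;

      /* CoreMedia is done with this sub-block: unmap and drop the memory */
      gst_memory_unmap (ctx->mem, &ctx->map);
      gst_memory_unref (ctx->mem);
      g_free (ctx);
    }

    static CMBlockBufferRef
    block_buffer_from_gst_buffer (GstBuffer * buf)
    {
      CMBlockBufferRef bbuf;
      guint i, n = gst_buffer_n_memory (buf);

      if (CMBlockBufferCreateEmpty (NULL, n, 0, &bbuf) != kCMBlockBufferNoErr)
        return NULL;

      for (i = 0; i < n; i++) {
        BlockCtx *ctx = g_new0 (BlockCtx, 1);
        CMBlockBufferCustomBlockSource src = { 0, };

        ctx->mem = gst_buffer_get_memory (buf, i);
        gst_memory_map (ctx->mem, &ctx->map, GST_MAP_READ);

        src.version = kCMBlockBufferCustomBlockSourceVersion;
        src.FreeBlock = block_free;
        src.refcon = ctx;

        /* no copy: the block buffer references the mapped GstMemory */
        CMBlockBufferAppendMemoryBlock (bbuf, ctx->map.data, ctx->map.size,
            NULL, &src, 0, ctx->map.size, 0);
      }

      return bbuf;
    }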
CMBlockBufferGetDataLength would return the entire data length, while
size of individual blocks can be smaller. Iterate over the block buffer
and add the individual (possibly non-contiguous) memory blocks.
https://bugzilla.gnome.org/show_bug.cgi?id=751071
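A minimal sketch of the iteration; 'bbuf' is the CMBlockBuffer being walked
and 'gstbuf' the GstBuffer being filled, both illustrative names. The block
buffer is retained for as long as each wrapped GstMemory stays in use:

    size_t offset = 0, total = CMBlockBufferGetDataLength (bbuf);

    while (offset < total) {
      char *data;
      size_t block_len;

      if (CMBlockBufferGetDataPointer (bbuf, offset, &block_len, NULL,
              &data) != kCMBlockBufferNoErr)
        break;

      gst_buffer_append_memory (gstbuf,
          gst_memory_new_wrapped (GST_MEMORY_FLAG_READONLY, data, block_len,
              0, block_len, (gpointer) CFRetain (bbuf),
              (GDestroyNotify) CFRelease));

      offset += block_len;
    }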
When AVFoundation indicates a supported frame rate range, add it to
the caps. This is important for devices such as the iPhone 6, which
indicate a single AVFrameRateRange of 2fps - 60fps.
https://bugzilla.gnome.org/show_bug.cgi?id=751048
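A minimal sketch of turning such a range into caps; the conversion helpers are
standard GStreamer API and the function name is illustrative:

    static void
    add_fps_range (GstCaps * caps, gdouble min_fps, gdouble max_fps)
    {
      gint min_n, min_d, max_n, max_d;

      gst_util_double_to_fraction (min_fps, &min_n, &min_d);
      gst_util_double_to_fraction (max_fps, &max_n, &max_d);
      gst_caps_set_simple (caps, "framerate", GST_TYPE_FRACTION_RANGE,
          min_n, min_d, max_n, max_d, NULL);
    }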
Even when we fail to encode a frame, we should still enqueue it so
it could be passed into handle_frame (with output_buffer == NULL).
Otherwise, we risk GstVideoEncoder's queue of frames growing unbounded.
Note: We're slightly changing the renegotiation code to accommodate
frames without output buffers, but this commit takes no ownership over
the way negotiation is being done.
https://bugzilla.gnome.org/show_bug.cgi?id=750669
VTCompressionSessionEncodeFrame retains the CVPixelBuffer during
encoding, and will release it as soon as it can (e.g. before it even
calls our callback). This means we can safely release input buffer
at this point, possibly allowing the system to reuse it sooner.
https://bugzilla.gnome.org/show_bug.cgi?id=750671
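A minimal sketch; variable names are illustrative and error handling is
omitted:

    VTCompressionSessionEncodeFrame (session, pixel_buffer, pts, duration,
        frame_props, frame, NULL);
    /* VT has retained pixel_buffer, so the codec frame no longer needs to
     * keep the input GstBuffer alive */
    gst_buffer_replace (&frame->input_buffer, NULL);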
Copying arbitrary metas is going to cause problems and this should really be
handled by the base class. It overrides most other things already anyway,
including timestamp and duration. Those are just set here now so we can
insert the frame sorted into the queue.
https://bugzilla.gnome.org/show_bug.cgi?id=748922
This decoder does not work if the width and height fields are not set
in the sinkpad caps. Let's make this explicit by adding them to
the template caps.
https://bugzilla.gnome.org/show_bug.cgi?id=749655
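A minimal sketch of such a template; the media type and the remaining fields
are trimmed for illustration:

    static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
        GST_PAD_SINK, GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("video/x-h264, "
            "width = (int) [ 1, MAX ], height = (int) [ 1, MAX ]"));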
It is incorrect to modify the frame properties after passing them, since
VTCompressionSessionEncodeFrame takes a reference and we have no control
over when it's being used.
In fact, the code can be simplified. We just preallocate the frame
properties for keyframe requests, and pass NULL otherwise.
https://bugzilla.gnome.org/show_bug.cgi?id=748467
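A minimal sketch of the simplified approach; variable names are illustrative:

    /* built once, reused for every keyframe request */
    const void *keys[] = { kVTEncodeFrameOptionKey_ForceKeyFrame };
    const void *values[] = { kCFBooleanTrue };
    CFDictionaryRef keyframe_props = CFDictionaryCreate (NULL, keys, values, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    /* later, per frame: pass the preallocated dictionary or NULL */
    VTCompressionSessionEncodeFrame (session, pixel_buffer, pts, duration,
        want_keyframe ? keyframe_props : NULL, frame, NULL);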
Unless stopRequest is set, we should unlock conditionally -- otherwise,
the 'create:' method can wake up to an empty buffer queue
and pull a nil buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=748054
The hardware-backed decoder is registered at a higher rank, while the default
vtdec stays at secondary rank. This allows decodebin/playbin to prefer the
hardware-based decoders and, if those fail to initialize because hardware
resources are busy, to fall back to e.g. the libav-based h264 decoder instead
of the software-based vtdec (which is slower), and only fall back to the
software-based vtdec if there is no higher-ranked decoder available.
With requestMediaDataWhenReadyOnQueue the layer will execute a block
when it would like more frames. Using this we can provide the current
frame and avoid needlessly filling the layer's buffer queue, which would
cause older frames to be displayed when under resource pressure.
Otherwise we might set bogus values or GST_CLOCK_TIME_NONE.
Also make sure to reset the caps field to NULL after unreffing
the caps to prevent accidental use afterwards, and unref any
old caps before we remember new caps.
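A minimal sketch; gst_caps_replace() unrefs the old caps, refs the new ones
and sets the field to NULL when asked, which covers all three points above:

    gst_caps_replace (&self->caps, new_caps);   /* remember new caps */
    /* ... */
    gst_caps_replace (&self->caps, NULL);       /* clear on stop/teardown */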
Use YUV instead of RGB textures, then convert using the new apple specific
shader in GstGLColorConvert. Also use GLMemory directly instead of using the
GL upload meta, avoiding an extra texture copy we used to have before.
When doing texture sharing we don't need to call CVPixelBufferLockBaseAddress to
map the buffer into CPU memory. This cuts about 10% relative CPU time from a vtdec !
glimagesink pipeline.
... and hope that everything will be fine. This shouldn't really happen but
previously happened during shutdown. It should be fixed in videoencoder now,
but it's better to be on the safe side here.
Use AVF provided timings to timestamp output buffers. Use the running time at
the time the first buffer is produced to base timestamps on. Report 1-frame
latency based on the negotiated framerate instead of hardcoding 4ms latency.
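A minimal sketch of the latency report, assuming a negotiated GstVideoInfo in
'info' and the latency query being answered in 'query'; the surrounding query
handling is omitted:

    if (info->fps_n > 0) {
      /* one frame of latency at the negotiated rate */
      GstClockTime latency =
          gst_util_uint64_scale (GST_SECOND, info->fps_d, info->fps_n);
      gst_query_set_latency (query, TRUE, latency, latency);
    }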
The property is in kbit/s and we store it in bit/s, so just multiply and
divide by 1000. No need to put a factor of 8 in there.
kVTCompressionPropertyKey_AverageBitRate is also in bit/s according to
its documentation.
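A minimal sketch of the conversion, assuming 'self->bitrate' holds the
property value in kbit/s:

    int bits_per_second = self->bitrate * 1000;
    CFNumberRef value = CFNumberCreate (NULL, kCFNumberIntType, &bits_per_second);
    VTSessionSetProperty (session, kVTCompressionPropertyKey_AverageBitRate, value);
    CFRelease (value);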
We will run into an assertion in set_caps() if we try to change
caps while the source is already running. Don't try to find new
caps in GstBaseSrc::negotiate() to prevent caps changes.
The object lock only protects the session, as we modify
the session from other threads when the bitrate property
is changed. Don't hold it much longer than for session
related things.
And we need to release the video decoder stream lock before
enqueueing a frame. It might wait for our callback to dequeue
a frame from another thread, which will then take the stream
lock too and deadlock.
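A minimal sketch of the lock dance; 'decoder' is the GstVideoDecoder and the
decode arguments are illustrative:

    OSStatus status;

    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    status = VTDecompressionSessionDecodeFrame (session, sample_buffer,
        decode_flags, frame, NULL);
    GST_VIDEO_DECODER_STREAM_LOCK (decoder);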
It is not required on OSX apparently and was only added in 10.9.6 there.
Calculating the correct level from the configuration is not trivial, so let's
just not set a level at all here.
iOS has special stride requirements that we don't know yet, so copy
input buffers into buffers allocated by iOS for now.
Later we should check the stride and probably provide a buffer pool for these
buffers so upstream can directly write in there.
gst_pad_get_pad_template_caps() returns a reference that must be unreffed,
so creating a copy using gst_caps_copy() and dropping the returned caps
results in a reference leak. Also the caps are pushed as an event downstream,
but this doesn't consume the caps, so they must still be unreffed.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=734534
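A minimal sketch of the leak-free pattern; gst_event_new_caps() takes its own
reference, so the caps returned by the template lookup still have to be
unreffed:

    GstCaps *caps = gst_pad_get_pad_template_caps (pad);
    gst_pad_push_event (pad, gst_event_new_caps (caps));
    gst_caps_unref (caps);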
The pixel buffer release callback is only called if the void *
dataPtr given to CVPixelBufferCreateWithPlanarBytes
is not NULL.
According to the documentation, dataPtr is supposed to be a
"plane description block", but no specific type is given.
https://bugzilla.gnome.org/show_bug.cgi?id=711847
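A minimal sketch, assuming the mapped video frame doubles as the "plane
description block"; variable and callback names are illustrative:

    CVPixelBufferCreateWithPlanarBytes (NULL, width, height,
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        vframe /* non-NULL dataPtr so the release callback actually fires */,
        0, n_planes, plane_base, plane_widths, plane_heights, plane_strides,
        release_planar_bytes, vframe, NULL, &pixbuf);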