First of a few commits to stop using CVOpenGLTextureCache on OSX and use
IOSurfaces directly instead. CVOpenGLTextureCache hasn't been updated for OpenGL
3, which is why texture sharing is currently disabled on OSX.
rename gst-launch --> gst-launch-1.0
replace old elements with new elements (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
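For example, a typical updated invocation looks like this (this exact pipeline
is illustrative, not necessarily one from the docs):

    gst-launch-1.0 videotestsrc ! videoconvert ! avenc_mpeg4 ! avimux ! filesink location=out.avi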
https://bugzilla.gnome.org/show_bug.cgi?id=759432
Recreating the swapchain while the window is hidden will fail and cause the
sink to crash. Instead, wait until the window is visible again before checking
whether the swapchain really has to be recreated.
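The shape of the fix, roughly (illustrative names, not the actual
d3dvideosink code):

    /* in the resize/visibility handler */
    if (!sink->window_visible) {
      sink->pending_resize = TRUE;      /* re-check once visible again */
      return;
    }
    recreate_swapchain_if_needed (sink);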
https://bugzilla.gnome.org/show_bug.cgi?id=741608
The video decoders tried calling gst_buffer_add_*meta() on non-writable
buffer resulting in warnings of this kind:
gstamcvideodec.c:921 (_gl_sync_render_unlocked): WARNING: amcvideodec
Failed to create the transformation meta for the gl_sync 0xabc03848
buffer 0xabb01b40 (0)
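The usual fix pattern is to ensure writability first (a minimal sketch;
gst_buffer_make_writable() may copy if someone else holds a reference):

    buffer = gst_buffer_make_writable (buffer);
    gst_buffer_add_video_affine_transformation_meta (buffer);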
https://bugzilla.gnome.org/show_bug.cgi?id=758694
Some devices only ever keep one buffer available in the GL queue, so
multiple calls to release_output_buffer only cause one frame to be rendered.
If there is a queue after amcvideodec (even playsink's small one), then
multiple buffers are pushed but only a small fraction of them are actually
rendered on time. The rest will either render some number of frames ahead of
where they are meant to be or time out waiting for a frame that has already
been rendered.
Solved by moving the release_output_buffer into the sync_meta that is pushed
downstream. When downstream renders, the custom sync implementation attempts
to release the current buffer (if not already released) and render. Once the
frame has been rendered to the screen, the next frame is released and is
hopefully available by the time the next frame is to be rendered.
This fixes a perceived frame jitter in the output.
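Conceptually, something like this runs on downstream's GL thread (a sketch;
the names are illustrative, not the actual amcvideodec code):

    static void
    sync_meta_render (Sync * sync)
    {
      if (!sync->released) {
        /* hand the buffer back to the codec's GL queue only now,
         * right before this frame is actually drawn */
        release_output_buffer (sync->codec, sync->buffer_index, TRUE);
        sync->released = TRUE;
      }
      /* then wait for the frame callback, updateTexImage and draw */
    }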
Year 12: I still don't understand how negotiation works.
Apparently gst_pad_query_caps doesn't do what I thought it did. To get the
actual caps that can flow through vtdec:src we must call gst_pad_peer_query_caps
with the template caps as filter.
Fixes negotiation with stuff that doesn't understand GLMemory (hello videoscale).
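In code, that amounts to (sketch; variable names illustrative):

    templ = gst_pad_get_pad_template_caps (GST_VIDEO_DECODER_SRC_PAD (vtdec));
    caps = gst_pad_peer_query_caps (GST_VIDEO_DECODER_SRC_PAD (vtdec), templ);
    gst_caps_unref (templ);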
This provides a performance and power usage improvement by removing
the texture copy from an OES texture to a 2D texture.
The flow is as follows:
1. Generate the output buffer with the required sync meta (carrying the
incrementing push counter) and OES GL memory
1.1 release_output_buffer (buf, render=true) and push downstream
2. Downstream either waits on the sync meta (timed wait) or drops the frame (no wait)
2.1 Timed wait for the frame number to reach the number of frame callbacks fired
2.2 Unconditionally update the image when the wait completes (success or fail).
Sets the affine transformation matrix meta on the buffer.
3. Downstream renders as usual.
At *some* point through this the on_frame_callback may or may not fire. If it
does fire, we can finish waiting early and render. Otherwise we have to
wait for a timeout to occur, which may cause more buffers to be pushed into the
internal GL queue and significantly decreases the chances of the
on_frame_callback firing again. This is because the frame callback only occurs
when the internal GL queue changes state from empty to non-empty.
Because there is no way to reliably correlate between the number of buffers
pushed and the number of frame callbacks received, there are a number of
workarounds in place.
1. We self-increment the ready counter when it falls behind the push counter
2. Time-based waits, as the frame callback may not be fired for a certain frame.
3. It is assumed that the device can render at speed or performs some QoS of
the internal GL queue (which may not match the GStreamer QoS).
It holds that we call SurfaceTexture::updateTexImage for each buffer pushed
downstream; however, there is no guarantee that updateTexImage will produce
the exact next frame (it could skip or duplicate), so synchronization is not
guaranteed to be accurate, although it seems close enough that the difference
is not visually discernible. This has not changed from before this patch. The
current requirement for synchronization is that updateTexImage is called at
the point in time when the buffer is to be rendered.
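A rough sketch of the timed wait in steps 2.1/2.2 above (GLib locking
assumed; the field names and the 100 ms timeout are illustrative):

    gint64 end_time = g_get_monotonic_time () + 100 * G_TIME_SPAN_MILLISECOND;

    g_mutex_lock (&sync->lock);
    /* 2.1: wait until enough frame callbacks have fired... */
    while (sync->frame_ready_count < sync->frame_number) {
      if (!g_cond_wait_until (&sync->cond, &sync->lock, end_time))
        break;                  /* ...or give up on timeout */
    }
    /* 2.2: unconditionally latch the newest frame, then set the affine
     * transformation matrix meta on the buffer */
    surface_texture_update_tex_image (sync->surface_texture);
    g_mutex_unlock (&sync->lock);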
https://bugzilla.gnome.org/show_bug.cgi?id=757285
Rework negotiation implementing GstVideoDecoder::negotiate. Make it possible to
switch texture sharing on and off at runtime. Useful to (eventually) turn
texture sharing on in pipelines where glimagesink is linked only after
decoding has already started (for example OWR).
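A sketch of what the negotiate vfunc can look like (assuming a boolean toggle
in the element; names illustrative):

    static gboolean
    gst_vtdec_negotiate (GstVideoDecoder * decoder)
    {
      GstCaps *caps;
      gboolean use_gl;

      caps = gst_pad_get_allowed_caps (GST_VIDEO_DECODER_SRC_PAD (decoder));
      use_gl = caps != NULL && gst_caps_get_size (caps) > 0
          && gst_caps_features_contains (gst_caps_get_features (caps, 0),
          GST_CAPS_FEATURE_MEMORY_GL_MEMORY);
      if (caps)
        gst_caps_unref (caps);

      /* enable/disable the texture cache based on use_gl, then chain up */
      return GST_VIDEO_DECODER_CLASS (parent_class)->negotiate (decoder);
    }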
Improve decode error handling by avoiding calling into GstVideoDecoder from the
VT decode callback. This removes contention on the GST_VIDEO_DECODER_STREAM_LOCK
which used to make the decode callback slow enough for VT to start dropping lots
of frames once the first frame was dropped.
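The general shape of that change (heavily simplified; the real callback is a
VTDecompressionOutputCallback and the queue/lock fields here are made up):

    static void
    decode_cb (void *user_data, CVImageBufferRef image)
    {
      GstVtdec *self = user_data;

      /* do NOT take GST_VIDEO_DECODER_STREAM_LOCK here */
      g_mutex_lock (&self->out_lock);
      g_queue_push_tail (&self->out_frames, CVBufferRetain (image));
      g_mutex_unlock (&self->out_lock);
      /* the streaming thread drains out_frames and calls
       * gst_video_decoder_finish_frame() from there */
    }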
Otherwise, gst_vtenc_negotiate_profile_and_level will double-release it,
since it only checks for profile_level != NULL. This caused crashes when a
vtenc instance was stopped and then restarted.
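The fix boils down to clearing the pointer after releasing it:

    if (self->profile_level) {
      CFRelease (self->profile_level);
      self->profile_level = NULL;   /* keep the != NULL check meaningful */
    }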
https://bugzilla.gnome.org/show_bug.cgi?id=757935
Use gst_gl_sized_gl_format_from_gl_format_type to get the format passed to
CVOpenGLESTextureCacheCreateTextureFromImage. Before this change, extracting
the second texture from the pixel buffer was failing on iOS 9.1.
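Roughly like this for the second (chroma) plane of NV12 (a sketch; the
concrete formats and sizes are an assumption for illustration):

    /* e.g. GL_RG8 on GLES3 instead of the unsized GL_RG */
    internal = gst_gl_sized_gl_format_from_gl_format_type (context,
        GL_RG, GL_UNSIGNED_BYTE);
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
        cache, pixbuf, NULL, GL_TEXTURE_2D, internal, width / 2, height / 2,
        GL_RG, GL_UNSIGNED_BYTE, 1, &texture);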
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
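For example:

    GstClockTimeDiff diff = GST_CLOCK_DIFF (expected, actual);

    /* logs e.g. -0:00:00.040000000 rather than -40000000 */
    GST_LOG_OBJECT (self, "jitter %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));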
https://bugzilla.gnome.org/show_bug.cgi?id=757480
Solved with a simple shader templating mechanism and string replacements
of the necessary sampler types/texture accesses and texture coordinate
mangling for rectangular and external-oes textures.
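An illustrative sketch of the templating (the token values are standard GLSL;
the template itself is made up):

    static const gchar *frag_templ =
        "%s\n"                                /* extension preamble */
        "uniform %s tex;\n"
        "varying vec2 v_texcoord;\n"
        "void main () {\n"
        "  gl_FragColor = %s (tex, %s);\n"    /* access fn + coord mangling */
        "}\n";

    /* 2D:   "", sampler2D, texture2D, v_texcoord
     * rect: "", sampler2DRect, texture2DRect, scaled coords
     * oes:  "#extension GL_OES_EGL_image_external : require",
     *       samplerExternalOES, texture2D, v_texcoord */
    shader = g_strdup_printf (frag_templ, preamble, sampler, access, coord);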
Add the various tokens/strings for the different texture types (2D, rect, oes).
Changes the GLMemory API to include the GstGLTextureTarget in all relevant
functions.
Update the relevant caps/templates for 2D-only textures.
Otherwise we're going to return times starting at 0 again after shutting down
an element for a specific input/output and then using it again later.
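One way to do that (purely illustrative field names):

    /* on shutdown, remember how far this input/output got */
    self->base_time += self->last_time;
    self->last_time = 0;

    /* when reporting times after a restart */
    return self->base_time + time;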
https://bugzilla.gnome.org/show_bug.cgi?id=755426