First, use top_builddir, otherwise it fails in out-of-source builds.
Second, link to the libtool archive directly so that make understands
the dependency.
0x04 signifies an MPEG elementary stream, but according to SMPTE RP 2008,
0x10 should be used for an H.264 byte-stream. This also fixes compatibility
of our files with ffmpeg.
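A sketch of the kind of change this implies; the variable and condition
names are hypothetical:

  if (is_h264_bytestream)
    stream_type = 0x10;   /* H.264 byte-stream per SMPTE RP 2008 */
  else
    stream_type = 0x04;   /* generic MPEG elementary stream */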
The scene graph can be initialized when we receive the window handle change
notification, in which case we will not receive a scene graph initialization
notification. Initialize ourselves in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=758337
Some devices only ever keep one buffer available in the GL queue, resulting
in multiple calls to release_output_buffer causing only one frame to be
rendered. If there is a queue after amcvideodec (even playsink's small one),
then multiple buffers are pushed but only a small fraction of them are
actually rendered on time. The rest will either render some number of frames
ahead of where they are meant to be or time out waiting for a frame that has
already been rendered.
Solved by moving the release_output_buffer into the sync_meta that is pushed
downstream. When downstream renders, the custom sync implementation attempts
to release the current buffer (if not already released) and render. Once the
frame has been rendered to the screen, the next frame is released and is
hopefully available by the time the next frame is to be rendered.
This fixes a perceived frame jitter in the output.
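A minimal sketch of the mechanism, assuming a sync meta carrying a user data
pointer; all names below are hypothetical, not the actual gstamcvideodec.c
code:

  #include <glib.h>

  typedef struct {
    GMutex lock;
    gboolean released;
    gpointer codec;
    gint buffer_index;
  } SyncData;

  /* hypothetical stand-in for the codec's release call */
  void release_output_buffer (gpointer codec, gint idx, gboolean render);

  /* Called by the custom sync implementation when downstream renders:
   * release the output buffer just-in-time, exactly once. */
  static void
  on_downstream_render (SyncData * data)
  {
    g_mutex_lock (&data->lock);
    if (!data->released) {
      /* render=TRUE queues the frame for display on the SurfaceTexture */
      release_output_buffer (data->codec, data->buffer_index, TRUE);
      data->released = TRUE;
    }
    g_mutex_unlock (&data->lock);
  }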
gstglsyncmeta.c -fPIC -DPIC -o .libs/libgstgl_1.0_la-gstglsyncmeta.o
gstglsyncmeta.c: In function 'gst_buffer_add_gl_sync_meta':
gstglsyncmeta.c:131:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
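For context, a generic illustration (not the actual gstglsyncmeta.c code) of
how a non-void function trips -Werror=return-type:

  /* Illustrative only: the flag == 0 path falls off the end
   * of a non-void function without returning a value. */
  static int
  pick (int flag)
  {
    if (flag)
      return 1;
  }   /* error: control reaches end of non-void function */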
Year 12: I still don't understand how negotiation works.
Apparently gst_pad_query_caps doesn't do what I thought it did. To get the
actual caps that can flow through vtdec:src we must call gst_pad_peer_query_caps
with the template caps as filter.
Fixes negotiation with stuff that doesn't understand GLMemory (hello videoscale).
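A sketch of the pattern with the real GstPad API; the 'decoder' variable is
assumed:

  /* Ask vtdec:src's peer what can actually flow, filtered by our
   * src pad template caps. */
  GstCaps *templ_caps = gst_pad_get_pad_template_caps (decoder->srcpad);
  GstCaps *allowed = gst_pad_peer_query_caps (decoder->srcpad, templ_caps);

  gst_caps_unref (templ_caps);
  /* ... choose the output format from 'allowed' ... */
  gst_caps_unref (allowed);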
This provides a performance and power usage improvement by removing
the texture copy from an OES texture to a 2D texture.
The flow is as follows:
1. Generate the output buffer with the required sync meta with the incrementing
push counter and OES GL memory
1.1 release_output_buffer (buf, render=true) and push downstream
2. Downstream waits on the sync meta (timed wait) or drops the frame (no wait)
2.1 Timed wait for the frame number to reach the number of frame callbacks
fired (see the sketch after this list)
2.2 Unconditionally update the image when the wait completes (success or fail).
Sets the affine transformation matrix meta on the buffer.
3. Downstream renders as usual.
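A minimal sketch of steps 2.1 and 2.2; the counters and the update_tex_image
wrapper are hypothetical, not the actual gstamcvideodec.c code:

  /* 2.1: bounded wait for the frame callbacks to catch up with the
   * frame we are about to render. */
  gint64 end_time = g_get_monotonic_time () + 30 * G_TIME_SPAN_MILLISECOND;

  g_mutex_lock (&state->lock);
  while (state->frames_available < sync->pushed_frame_number) {
    if (!g_cond_wait_until (&state->cond, &state->lock, end_time))
      break;                    /* timed out: proceed anyway */
  }
  g_mutex_unlock (&state->lock);

  /* 2.2: unconditionally latch the newest decoded frame into the
   * OES texture (SurfaceTexture::updateTexImage underneath). */
  update_tex_image (state);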
At *some* point through this, the on_frame_callback may or may not fire. If it
does fire, we can finish waiting early and render. Otherwise we have to wait
for a timeout to occur, which may cause more buffers to be pushed into the
internal GL queue and significantly decreases the chances of the
on_frame_callback firing again. This is because the frame callback only occurs
when the internal GL queue changes state from empty to non-empty.
Because there is no way to reliably correlate between the number of buffers
pushed and the number of frame callbacks received, there are a number of
workarounds in place.
1. We self-increment the ready counter when it falls behind the push counter
(see the sketch below).
2. Time-based waits, as the frame callback may not be fired for a certain frame.
3. It is assumed that the device can render at speed or performs some QoS of
the internal GL queue (which may not match the GStreamer QoS).
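A sketch of workaround 1, with the same hypothetical names as above:

  /* If fewer frame callbacks arrived than buffers were pushed, catch
   * the ready counter up so a waiter never stalls on a callback that
   * will never come. */
  g_mutex_lock (&state->lock);
  if (state->frames_available < sync->pushed_frame_number - 1)
    state->frames_available = sync->pushed_frame_number - 1;
  g_mutex_unlock (&state->lock);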
It holds that we call SurfaceTexture::updateTexImage for each buffer pushed
downstream; however, there is no guarantee that updateTexImage will result in
the exact next frame (it could skip or duplicate), so synchronization is not
guaranteed to be accurate, although it seems close enough that the difference
cannot be discerned visually. This has not changed from before this patch. The
current requirement for synchronization is that updateTexImage is called at
the point in time when the buffer is to be rendered.
https://bugzilla.gnome.org/show_bug.cgi?id=757285
There could be other ways/requirements for synchronising two GPU command
streams (whether GL or platform-specific),
e.g. glFenceSync/eglWaitNative/cond/etc.
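For instance, a GL fence (standard desktop GL 3.2 / GLES 3.0 API) is one such
mechanism:

  /* Producer context: insert a fence after submitting commands and
   * flush so it becomes visible to other contexts. */
  GLsync fence = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  glFlush ();

  /* Consumer context: make its command stream wait on the fence. */
  glWaitSync (fence, 0, GL_TIMEOUT_IGNORED);
  glDeleteSync (fence);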
Rework negotiation implementing GstVideoDecoder::negotiate. Make it possible to
switch texture sharing on and off at runtime. Useful to (eventually) turn
texture sharing on in pipelines where glimagesink is linked only after
decoding has already started (for example OWR).
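GstVideoDecoder::negotiate is a standard vfunc; a minimal sketch of the
wiring (function names as in a typical element):

  static gboolean gst_vtdec_negotiate (GstVideoDecoder * decoder);

  static void
  gst_vtdec_class_init (GstVtdecClass * klass)
  {
    GstVideoDecoderClass *video_decoder_class = GST_VIDEO_DECODER_CLASS (klass);

    /* lets caps (and texture sharing) be renegotiated at runtime */
    video_decoder_class->negotiate = gst_vtdec_negotiate;
  }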
Improve decode error handling by avoiding calling into GstVideoDecoder from the
VT decode callback. This removes contention on the GST_VIDEO_DECODER_STREAM_LOCK
which used to make the decode callback slow enough for VT to start dropping lots
of frames once the first frame was dropped.
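A sketch of the shape of the fix; VTDecompressionOutputCallback is
VideoToolbox's real callback signature, while the queueing helper is
hypothetical:

  /* Runs on VideoToolbox's decode thread: do NOT take the
   * GST_VIDEO_DECODER_STREAM_LOCK here; just record the result and
   * let the streaming thread handle it later. */
  static void
  output_callback (void *refcon, void *frame_refcon, OSStatus status,
      VTDecodeInfoFlags flags, CVImageBufferRef image, CMTime pts,
      CMTime duration)
  {
    GstVtdec *vtdec = refcon;

    enqueue_decoded_frame (vtdec, frame_refcon, status, image);   /* hypothetical */
  }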
This no longer does anything, and it was marked as CONSTRUCT_ONLY, which
means someone would really have to go out of their way to be able to set it.
That would only be done in very custom scenarios, if ever, and those will
likely target a specific version of GStreamer, so there is probably not much
point in keeping it deprecated for a while before removing it.
If packet->payload_unit_start_indicator is true and the pointer field is 0,
there is no discontinuity check. Therefore there could be an incomplete
previous section that needs to be cleared.
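A sketch of the fix, with hypothetical field and helper names:

  /* A unit starting at pointer 0 means any partially accumulated
   * section data from previous packets is stale: drop it. */
  if (packet->payload_unit_start_indicator && pointer == 0) {
    if (stream->section_offset > 0)
      clear_pending_section (stream);   /* hypothetical helper */
  }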
https://bugzilla.gnome.org/show_bug.cgi?id=758010
Given a NULL-terminated string s,

  s[i] = '\0';
  i++;

does not guarantee that the string starting at s[i] is NULL-terminated, and
thus string operations could read off the end of the array.
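An illustrative case (not the actual code):

  char s[4] = "abc";   /* s[3] holds the terminator */
  size_t i = 3;

  s[i] = '\0';         /* overwrite the existing terminator */
  i++;                 /* i == 4: s + i points one past the array */
  /* any string operation on &s[i] now reads out of bounds */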
https://bugzilla.gnome.org/show_bug.cgi?id=758039
gst_gl_shader_detach_unlocked already removes the list entry, so attempting
to use that element to iterate to the next stage could read invalid data.
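A sketch of the safe iteration pattern, assuming a hypothetical stage list:

  /* Never advance via a node that detach just unlinked: re-read the
   * list head each time instead. */
  while (shader->priv->stages) {
    GstGLSLStage *stage = shader->priv->stages->data;

    gst_gl_shader_detach_unlocked (shader, stage);   /* removes the entry */
  }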
Based on patch by Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=758039