This is potentially racy (in the unlikely scenario that we get two
first-time calls to gst_player_error_quark() at the same time). This
should not impact anything in terms of performance since it's only on
the error path.
The call itself could just be inlined by making GST_PLAYER_ERROR be
defined to the g_quark_from_static_string() call, but this feels ugly
from an API perspective.
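For reference, a rough sketch of the two options being weighed (the quark string and exact macro are assumptions, not copied from the header):

  #include <glib.h>

  /* current approach: lazily-initialised quark behind a function call */
  GQuark
  gst_player_error_quark (void)
  {
    static GQuark quark = 0;

    if (!quark)   /* first-time initialisation; theoretically racy, but only on the error path */
      quark = g_quark_from_static_string ("gst-player-error-quark");
    return quark;
  }
  #define GST_PLAYER_ERROR (gst_player_error_quark ())

  /* rejected alternative: inline the lookup in the macro itself, e.g.
   *   #define GST_PLAYER_ERROR (g_quark_from_static_string ("gst-player-error-quark"))
   */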
If a sub class of GstGLContext does not create a group
then it currently crashes:
0 g_atomic_int_get (&share->refcount)
1 _context_share_group_is_shared (context->priv->sharegroup)
2 gst_gl_context_is_shared
3 _default_set_sync_gl
https://bugzilla.gnome.org/show_bug.cgi?id=774518
This is a subclass of VideoFilter yet does not use any of its
features. This also fixes an issue where the incoming images have custom
strides, as the VideoMeta is no longer ignored.
https://bugzilla.gnome.org/show_bug.cgi?id=775288
When a MSS server hosts a live stream the fragments listed in the
manifest usually don't have accurate timestamps and duration, except
for the first fragment, which additionally stores timing information
for the few upcoming fragments. In this scenario it is useless to
periodically fetch and update the manifest and the fragments list can
be incrementally built by parsing the first/current fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=755036
Calling g_main_context_push_thread_default() and then g_main_context_invoke()
(used by gst_gl_window_send_message_async()) in the same thread will
cause the invoked function to run immediately instead of being delayed.
This meant that the creation of the OpenGL context did not wait until the
main loop had completely started up and, as a result, would sometimes
deadlock in short create/destroy scenarios.
https://bugzilla.gnome.org/show_bug.cgi?id=775171
626bcccff9 removed some locks that
allowed the main loop quit to occur before the context was fully
created.
2776cef25d attempted to re-add them but
missed the scope of the quit() call.
Also remove the use of g_thread_join() as that's not safe to use when
it's possible to lose the last reference from the GL thread.
https://bugzilla.gnome.org/show_bug.cgi?id=775171
The smallest possible section needs to be at least 3 bytes (i.e. just the short
header).
Non-short headers need to be at least 11 bytes long (3 for the minimum header,
5 for the non-short header, and 4 for the CRC).
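A minimal sketch of those bounds (constants taken from the text above, not from the parser source):

  #include <glib.h>

  static gboolean
  section_length_is_sane (gboolean short_header, gsize len)
  {
    if (len < 3)
      return FALSE;                 /* even a short-header-only section needs 3 bytes */
    if (!short_header && len < 3 + 5 + 4)
      return FALSE;                 /* minimum header + non-short header + CRC = 11 bytes */
    return TRUE;
  }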
https://bugzilla.gnome.org/show_bug.cgi?id=775048
It's been removed and thus compiling anything against GstGLMemoryEGL
would error with:
In file included from gstomxvideodec.c:41:0:
usr/include/gstreamer-1.0/gst/gl/egl/gstglmemoryegl.h:32:41: fatal error: gst/gl/egl/gstglcontext_egl.h: No such file or directory
#include <gst/gl/egl/gstglcontext_egl.h>
^
https://bugzilla.gnome.org/show_bug.cgi?id=774886
Otherwise, when the application reuses the same UIView, we were getting
draw notifications for the previous view/layer, which weren't valid anymore
and were referencing pointers that had been freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753003
This changes the failure case to require a consecutive number of
failures rather than being spread out over the entire stream.
Fixes the case where fetching the manifest was intermittent.
https://bugzilla.gnome.org/show_bug.cgi?id=774177
Formats that need to update the manifest to know about new
fragments as they're being written by the server would never receive an
updated fragment list after a seek event.
https://bugzilla.gnome.org/show_bug.cgi?id=774177
- xcb is supposedly thread-safe!
videotestsrc ! glimagesink now doesn't spuriously result in a
'call XInitThreads()' error. However, if anybody else is using X11,
then XInitThreads() still needs to be called, and multiple glimagesinks
still need XInitThreads().
Everything still takes libX11 handles as they are compatible with the xcb
variants. Unfortunately we cannot move fully over to xcb due to GLX being
entirely based on Xlib. It's also impossible to transform an xcb_connection
into a Display, which means we require X11 handles.
The spec allows the core/compatibility profiles to be used
with #version 150.
Also tighten up the tests to check for default profiles being chosen
correctly.
The change to use GST_EXPORT for symbols under Windows requires
GST_EXPORTS for internal use, and that is also needed under Autotools.
The same thing is done for gstreamer-1.0.dll in -core.
The calling convention may be deprecated, but we still need it for
OpenGL. The build issue was caused by an incorrect syntax being used for
the WINAPI (__stdcall) prototype in function pointers which was accepted
by GCC but is rejected by MSVC.
With MSVC, this gives the following warning:
warning C4305: 'function': truncation from 'double' to 'gfloat'
Apparently, MSVC does not figure out what type to use for constants
based on the assignment. This warning is very spammy, so let's try to
fix it.
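Illustrative example only (not the actual offending code); the fix is to use explicit float literals:

  #include <glib.h>

  static void
  truncation_example (void)
  {
    gfloat a = 0.5;    /* double constant truncated to gfloat: MSVC warns C4305 */
    gfloat b = 0.5f;   /* explicit float literal: no warning */

    (void) a;
    (void) b;
  }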
At minimum, we only need to glFlush() if we are in a shared GL context
environment. Move the glFinish() to when the actual wait is requested
which may be never. Improves the throughput on older GL systems without
GL3/GLES3 and/or fence sync objects.
In order to calculate the *actual* bitrate for downloading a fragment
we need to take into account the time since we requested the fragment.
Without this, the bitrate calculations (previously reported by queue2)
would be biased since they wouldn't take into account the request latency
(that is the time between the moment we request a specific URI and the
moment we receive the first byte of that request).
Examples where it would be biased are high-bandwidth but high-latency
networks. If you download 5MB in 500ms, but it takes 200ms to get the first
byte, queue2 would report 80Mbit/s (5MB in 500ms), but taking the request
into account it is only 57Mbit/s (5MB in 700ms).
While this would not cause too much issues if the above fragment represented
a much longer duration (5s of content), it would cause issues with short
ones (say 1s, or when doing keyframe-only requests which are even shorter)
where the code would expect to be able to download up to 80Mbit/s ... whereas
if we take the request time into account it's much lower (and we would
therefore end up doing late requests).
Also calculate the request latency for debugging purposes and further
usage (it could allow us to figure out the maximum request rate for
example).
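A hedged sketch of the adjusted estimate (hypothetical names, not the element's actual code):

  #include <gst/gst.h>

  static guint64
  estimate_bitrate (guint64 bytes, GstClockTime request_sent,
      GstClockTime first_byte, GstClockTime last_byte)
  {
    GstClockTime request_latency = first_byte - request_sent;  /* e.g. 200ms */
    GstClockTime transfer_time = last_byte - first_byte;       /* e.g. 500ms */

    /* 5MB over 700ms ~= 57Mbit/s instead of the 80Mbit/s seen by queue2 */
    return gst_util_uint64_scale (bytes * 8, GST_SECOND,
        request_latency + transfer_time);
  }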
https://bugzilla.gnome.org/show_bug.cgi?id=733959
https://bugzilla.gnome.org/show_bug.cgi?id=772330
Using g_thread_join() in _finalize() handlers may result in a deadlock
joining the current thread when the last reference is held by a signal
handler.
e.g.:
error 'Resource deadlock avoided' during 'pthread_join (pt->system_thread, NULL)'
The backtrace looks like this:
[...]
g_thread_join ()
gst_gl_window_finalize ()
gst_gl_window_x11_finalize ()
g_object_unref ()
g_value_unset ()
g_signal_emit_valist ()
g_signal_emit ()
gst_gl_window_send_mouse_event ()
gst_gl_window_mouse_event_cb ()
g_main_dispatch ()
[..]
g_main_loop_run ()
gst_gl_window_navigation_thread ()
g_thread_proxy ()
start_thread ()
clone ()
We cannot set the x, y coordinates of the video frame on dispmanx at
this point. We need to teach the dispmanx backend about the
set_render_rectangle API to draw a video together with other UI.
This patch keeps the current behavior which places video frame at the
center of the display if there is no set_render_rectangle call to the
dispmanx window.
https://bugzilla.gnome.org/show_bug.cgi?id=766018
e.g. passing with_gl_api=gles2 would still build the glx code but without
linking against the libGL library, which is where the glX* functions are
located, and would result in a linker error.
Solved by checking for the libGL library if either opengl or glx may be
needed and then disabling the corresponding deps as requested.
Allowing us to tell GstPad why we are failing an event, which might
be because we are 'flushing' even if the sinkpad is not in flush state
at that point.
The tests were broken since 91fea30, which changed glupload to return
GST_GL_UPLOAD_RECONFIGURE if the texture target in the input buffers doesn't
match the texture-target configured in the output caps.
This commit fixes that and adds more checks for the new behaviour.
Now when used with video/x-raw as input, the GLMemoryUpload method checks for
->tex_target in input GLMemory(es) and sets the output texture-target
accordingly.
Fixes video corruption with a pipeline like avfvideosrc ! video/x-raw !
glimagesink where on macos avfvideosrc pushes RECTANGLE textures but glupload
was configuring texture-target=2D as output.
The headers passed as parameters are relative to the build dir,
basically "../subproject/gst-plugins-bad/gst-libs/gst/mpegts/XXX.h",
but that does not match what is needed at build time when building as a
subproject. Also, we always add the current dir as include_dir, so we are
safe including directly.
And link mpegtsdemux against the 'math' library as it is needed.
Don't set the chosen texture-target into the wrong structure.
The input caps may not be writable, and in any case - the
intention was to configure the othercaps. Also, remove an
extra unref - the othercaps ref is consumed by
gst_caps_make_writable already.
And scale the bitrate with the absolute rate (if it's bigger than 1.0) to get
to the real bitrate due to faster playback.
In my tests this allowed playing a stream at 10x speed without buffering, as
the lowest bitrate is chosen, instead of staying on/selecting the highest
bitrate and then buffering all the time.
It was previously disabled for reasons that were not very well specified,
and which no longer seem valid nowadays.
Prevent the manifest update loop from looping endlessly
after a seek event, by clearing the variable that tells
the task function not to immediately exit.
The new streams should not be exposed until all streams are done with the
current fragment. The old code is incorrect and actually only checked the
current stream. Fix this by properly checking all streams.
Also, ignore the current stream. The code is only reached when the current
stream finished downloading and since
07f49f15b1 ("adaptivedemux: On EOS, handle it
before waking download loop") download_finished is set after
gst_adaptive_demux_stream_advance_fragment_unlocked() is called.
Without this HLS playback with multiple streams is broken, because the new
streams are never exposed.
https://bugzilla.gnome.org/show_bug.cgi?id=770075
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Multiple threads may be accessing the wayland fd at the same time, which
requires the use of the special wayland API to ensure nobody
steals reads and causes a stall for anyone else.
This allows to gradually download part of a fragment when the final size is
not known and only a part of it should be downloaded. For example when only
the moof should be parsed and/or a single keyframe should be downloaded.
https://bugzilla.gnome.org/show_bug.cgi?id=741104
When connecting to qmlglsrc, the X11 event loop will be replaced by the Qt
event loop, which causes the window to stop receiving events (such as
resize) from the X server.
https://bugzilla.gnome.org/show_bug.cgi?id=768160
Makes infinitely more sense, and implementations were expecting that behaviour
anyway and would enter a resize, draw, resize, draw, ... cycle instead of only
resizing once.
This helps catch those 404 server errors in live streams when
seeking to the very beginning, as the server will handle a
request with some delay, which can cause it to drop the fragment
before sending it.
https://bugzilla.gnome.org/show_bug.cgi?id=753751
To allow adaptivedemux to make retry decisions, it needs to know what
sort of HTTP error has occurred. For example, the retry logic for a
410 error is different from a 504 error.
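Purely illustrative sketch of the kind of decision this enables (not adaptivedemux's actual policy):

  #include <glib.h>

  static gboolean
  should_retry_fragment (guint http_status)
  {
    switch (http_status) {
      case 410:                /* Gone: permanent, no point retrying the same URI */
        return FALSE;
      case 504:                /* Gateway Timeout: transient, retrying may succeed */
        return TRUE;
      default:
        return http_status >= 500;   /* treat other server errors as transient */
    }
  }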
https://bugzilla.gnome.org/show_bug.cgi?id=753751
Some derived classes (at least dashdemux) expose a seeking range
based on wall clock. This means that a subsequent seek to the start
of this range will be before the allowed range.
To solve this, seeks without the ACCURATE flag are allowed to seek
before the start for live streams, in which case the segment is
shifted to start at the start of the new seek range. If there is
an end position, it is shifted too, to keep the duration constant.
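A minimal sketch of that clamping (hypothetical helper, not the base-class code):

  #include <gst/gst.h>

  static void
  clamp_live_seek (GstSeekFlags flags, GstClockTime range_start,
      GstClockTime * start, GstClockTime * stop)
  {
    if (!(flags & GST_SEEK_FLAG_ACCURATE) && *start < range_start) {
      GstClockTime shift = range_start - *start;

      *start = range_start;
      if (*stop != GST_CLOCK_TIME_NONE)
        *stop += shift;               /* keep the requested duration constant */
    }
  }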
https://bugzilla.gnome.org/show_bug.cgi?id=753751
There's no need for the jump to an extra thread in most cases, especially
when relying solely on a shader to render. We can use the provided
render_to_target() functions to simplify filter writing.
Facilities are given to create fbo's and attach GL memory (renderbuffers
or textures). It also keeps track of the renderable size for effective use
with glViewport().
Make state changes of internal elements more reliable by locking
their state, and ensuring that they aren't blocked pushing data
downstream before trying to set their state.
Add a boolean to avoid starting tasks when the main
thread is busy trying to shut the element down.
Try harder to make switching pads work better by
making sure concurrent downloads are finished before exposing
a new set of pads.
Release the manifest lock when signalling no-more-pads, as
that can call back into adaptivedemux again
If other stream fragments are still downloading but new streams
have been scheduled, don't expose them yet - wait until the last
one finishes. Otherwise, we can cancel a partially downloaded
auxiliary stream and cause a gap.
Drop the manifest lock when performing actions that might
call back into adaptivedemux and trigger deadlocks, such
as adding/removing pads or sending in-band events (EOS).
Unlock the manifest lock when changing the child bin state to
NULL, as it might call back to acquire the manifest lock when
shutting down pads.
Drop the manifest lock while pushing events.
In the case of KEY_UNIT and TRICKMODE_KEY_UNITS seeks, we want to
"snap" to the closest fragment.
Without this, we end up pushing out a segment which does not match
the first fragment timestamp being pushed out, resulting in one or
more buffers being eventually dropped because they are out of segment.
Calling glUniformMatrix before the shader is bound is invalid and
would result in errors like:
GL_INVALID_OPERATION in glUniformMatrix(program not linked)
Move glUniformMatrix() to after the gst_gl_shader_use() call.
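Roughly the required ordering (the uniform name here is only an example):

  #include <gst/gl/gl.h>

  static void
  set_transformation (GstGLShader * shader, const gfloat * matrix)
  {
    gst_gl_shader_use (shader);       /* bind the program first... */
    gst_gl_shader_set_uniform_matrix_4fv (shader, "u_transformation", 1,
        FALSE, matrix);               /* ...then upload the uniform */
  }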
Rather than assuming something. e.g. zerocopy on iOS with GLES3 requires
the use of Luminance/Luminance Alpha formats and does not work with
Red/RG textures.
Take the used texture type from the memory instead.
Fixes conversion from multi-planar YUV formats with two components per plane
(NV12, NV21, YUY2, UYVY, GRAY16_*, etc) with Luminance Alpha input textures.
This is also needed for zerocopy decoding on iOS with GLES 3.x.
The intention was to assert if both maj and min were NULL (as there would be no
point calling the function). Instead if either maj or min were NULL, the assert
would occur.
Fix that.
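A sketch of the intended check (not the upstream function body):

  #include <glib.h>

  static void
  get_version_sketch (gint * maj, gint * min)
  {
    /* before: g_return_if_fail (maj != NULL && min != NULL) fired if either was NULL */
    g_return_if_fail (maj != NULL || min != NULL);

    if (maj)
      *maj = 0;
    if (min)
      *min = 0;
  }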
Newer devices require using a different GLSL extension for accessing
external-oes textures in a shader using the texture() functions.
While the GL_OES_EGL_image_external_essl3 should supposedly be supported
on any GLES3 android device, the extension was defined after a lot of the
older drivers were built so they will not know about it. Thus there are two
possible interpretations of which of texture[2D]() should be supported for
external-oes textures. Strict adherence to the GL_OES_EGL_image_external
extension spec which uses texture2D() or following GLES3's pattern, also
allowing texture() as a function for accessing external-oes textures.
This adds another mangling pass to convert
#extension GL_OES_EGL_image_external : ...
into
#extension GL_OES_EGL_image_external_essl3 : ...
on GLES3 and when the GL_OES_EGL_image_external_essl3 extension is supported.
Only uses texture() when GLES3 and the GL_OES_EGL_image_external_essl3
extension are supported for external-oes textures.
Uses GLES2 + texture2D() + GL_OES_EGL_image_external in all other external-oes
cases.
https://bugzilla.gnome.org/show_bug.cgi?id=766993
Otherwise we will leak GstGLContext's when adding the same context more than
once.
Fixes a regression caused by 5f9d10f603 in the
gstglcontext unit test that failed with:
Assertion 'tmp == NULL' failed
Until now we would start the task when the pad is activated. Part of the
activation consists of testing if the pipeline is live or not.
Unfortunately, this is often too soon, as it's likely that the pad gets
activated before it is fully linked in a dynamic pipeline.
Instead, start the task when the first serialized event arrives. This is
a safe moment as we know that the upstream chain is complete and, just
like during pad activation, the pads are locked, hence cannot change.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
If the input buffer is after the end of the output buffer, then waiting
for more data won't help. We will never get an input buffer for this point.
This fixes compositing of streams from rtspsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=766422
Provide a function to get the affine matrix in the meta in terms of NDC
coordinates and use as a standard opengl matrix.
Also advertise support for the affine transformation meta in the allocation
query.
The URI downloader is creating the source element with
gst_element_factory_make() that returns a floating reference that nobody
is consuming. This is causing problems in WebKit, where the smart
pointers used to take references of the source element get confused and
end up consuming the floating reference and then releasing the element,
which usually crashes because the URI downloader still tries to use its
src element. See https://bugs.webkit.org/show_bug.cgi?id=144040.
This commit adds two helper functions to ensure and destroy the source
element, to make the code simpler and less error prone. The ensure
method takes care of checking if we can reuse the existing one or we
need to create a new one, taking always its ownership. The destroy
method simply avoids duplicated code to set the source to NULL state and
then unref it.
https://bugzilla.gnome.org/show_bug.cgi?id=766053
The gst_adaptive_demux_wait_until() function can be woken up either
by its end_time being reached, or from other threads that want to
interrupt the waiting thread.
If the thread is interrupted, it needs to cancel its async clock callback
by unscheduling the clock callback. However, the callback task might already
have been activated, but is waiting for the mutex to become available. In this
case, the call to unschedule does not stop the callback from executing.
The solution to this second issue is to use a reference counted object that
is decremented by both the gst_adaptive_demux_wait_until() function and the
call to gst_clock_id_wait_async (). In this way, the GstAdaptiveDemuxTimer
object is only deleted when both the gst_adaptive_demux_wait_until() function
and the async callback are finished with the object.
https://bugzilla.gnome.org/show_bug.cgi?id=765728
Because current GstEGLImageMemory does not inherit GstGLMemory, GLUpload
allocates additional GLMemory and uploads the decoded contents from the decoder
which uses EGLImage (e.g. gst-omx in RPi).
This work adds GstGLMemoryEGL to avoid this overhead. Decoders allocate
GstGLMemoryEGL and decode its contents to the EGLImage of GstGLMemoryEGL. And
GLUpload uses this memory without allocation of additional textures and blit
operations.
[Matthew Waters]: gst-indent the sources and fix a critical retrieving the egl
display from the memory.
https://bugzilla.gnome.org/show_bug.cgi?id=760916
Allows creating wrapped memories with GstGLAllocationParams.
The wrapped pointers will be set in the parameters before being passed
to the memory allocation function.
Some platforms provide an old version of GLES2/gl2.h and GLES2/gl2ext.h that
will fail when including GLES3/gl3.h due to missing typedef's.
Seen on the RPi.
Adds a new function to the mpegts lib to create an iso639 language
descriptor from a language and use it in mpegtsmux to add
a language descriptor to audio streams that have a language set.
https://bugzilla.gnome.org/show_bug.cgi?id=763647
This fixes a race where we check if there is a clock, then it gets
removed, and we end up calling gst_clock_new_single_shot_id() with a NULL
pointer instead of a valid clock and also calling gst_object_unref()
with a NULL pointer later.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
There are several places in adaptivedemux where it waits for
time to pass, for example to wait until it should next download
a fragment. The problem with this approach is that it means that
unit tests are forced to execute in realtime.
This commit replaces the use of g_cond_wait_until() with single
shot GstClockID that signals the condition variable. Under normal
usage, this behaves exactly as before. A unit test can replace the
system clock with a GstTestClock, allowing the test to control the
timing in adaptivedemux.
https://bugzilla.gnome.org/show_bug.cgi?id=762147
A realtime clock is used in many places, such as deciding which
fragment to select at start up and deciding how long to sleep
before a fragment becomes available. For example dashdemux needs to
sample the client's estimate of UTC when selecting where to start
in a live DASH stream.
The problem with dashdemux calculating the client's idea of UTC is
that it makes it difficult to create unit tests, because the passage
of time is a factor in the test.
This commit changes dashdemux and adaptivedemux to use the
GstSystemClock, so that a unit test can replace the system clock when
it needs to be able to control the clock.
This commit makes no change to the behaviour under normal usage, as
GstSystemClock is based upon the system time.
https://bugzilla.gnome.org/show_bug.cgi?id=762147
Don't create many short lived locks/conds in gst_gl_window_send_message. This is
a micro optimization to save a bunch of pthread_* calls which are expensive on
OSX/iOS and possibly other platforms.
We weren't using the result of find_best_format at all.
Also, move the find_best_format usage to the default update_caps() to make
sure that it is also overridable.
https://bugzilla.gnome.org/show_bug.cgi?id=764363
The subsampling_x, subsampling_y, bit_depth, color_space and color_range
fields are moved from GstVp9FrameHdr to the global GstVp9Parser structure.
These fields are only present in keyframe or intra-only frame, no need to
duplicate them for inter-frames. This is an ABI change.
https://bugzilla.gnome.org/show_bug.cgi?id=764370
While calling eglCreateImage without a GL context current in the executing
thread works on the RPi, some other implementations will return errors.
Marshal the eglCreateImage call to the GL thread to appease these implementations.
There are numerous slight differences required between Desktop GL and GLES3 for
multiple render targets.
1. gl_FragData doesn't exist at all and one is required to use
'layout (location = ?) out ...' instead.
2. gl_FragColor doesn't exist, same as 1
3. texture2D() has been deprecated
Fortunately most of these have been taken care of with GL3 and the shader
mangling already exists so just expand the conditions they are used in. The
gl_FragData issue requires a new mangle pass though. We also use this new
pass on desktop GL for consistency.
By default, reading GL_RED or GL_RG is unsupported by glReadPixels unless
exposed through GL_COLOR_READ_IMPLEMENTATION_FORMAT/TYPE. This allows
downloading multiple-planar video frames where possible.
There are some cases where it's needed for binding in/out variables in shaders.
e.g. glsl 150 (gl 3.2) doesn't support the 'layout (location = ?)' specifiers in
the shader source so we have to bind them ourselves.
Happens e.g. if a RECONFIGURE event is sent from downstream while we're
switching pads at this very moment. The old pad is gone and the stream has a
new pad.
https://bugzilla.gnome.org/show_bug.cgi?id=764404
Previously, while allocating the pad number for a new pad, aggregator was
maintaining an interesting relationship between the pad count and the pad
number.
If you requested a sink pad called "sink_6", padcount (which is badly named and
actually means number-of-pads-minus-one) would be set to 6. Which means that if
you then requested a sink pad called "sink_0", it would be assigned the name
"sink_6" again, which fails the non-uniqueness test inside gstelement.c.
This can be fixed by instead setting padcount to be 7 in that case, but this
breaks manual management of pad names by the application since it then becomes
impossible to request a pad called "sink_2". Instead, we fix this by always
directly using the requested name as the sink pad name. Uniqueness of the pad
name is tested separately inside gstreamer core. If no name is requested, we use
the next available pad number.
Note that this is important since the sinkpad numbering in aggregator is not
meaningless. Videoaggregator uses it to decide the Z-order of video frames.
Without the GST_GL_API_GLES2 bit set, we will not even attempt to look
for the function pointers in the core library and will fallback to
glFlush/glFinish.
If the user uploads their own texture without setting the unpack length,
then the result will have the appearance of stride mismanagement due to
an incorrect row length.
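For desktop GL the relevant knob looks roughly like this (illustrative only; the row length is given in pixels, not bytes):

  #include <GL/gl.h>

  static void
  set_unpack_length (int stride_bytes, int bytes_per_pixel)
  {
    /* tell GL how wide a row really is so uploads of strided data don't skew */
    glPixelStorei (GL_UNPACK_ROW_LENGTH, stride_bytes / bytes_per_pixel);
  }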
If we are given caps with extra features (like the overlay composition
features), we can only deal with that when we are in passthrough mode.
Previously we were bailing entirely and not allowing passthrough filter elements
with things like textoverlay.
Fixes the following pipeline (assuming glfilter supports passthrough):
gl ! textoverlay ! glfilter ! ... ! glimagesinkelement
https://bugzilla.gnome.org/show_bug.cgi?id=763756
When transforming, explode it out into the necessary caps features both
with and without the passthough features.
Fixes negotiation in the following class of pipelines:
gl ! textoverlay ! glupload ! glimagesinkelement
https://bugzilla.gnome.org/show_bug.cgi?id=763756
GL 1.4 (with GL_ARB_shader_objects) doesn't have glIsProgram or glIsShader
equivalents. As they are simply assertions, skip them when there isn't a
valid function pointer.
This is a regression since mpegvideoparser was switched to
use the codecparsing library.
The problem is that the high bit of the profile_and_level is used
to specify non-hierarchical profiles and levels. Unfortunately we
were discarding that information.
Expose that escape bit, and use it in the element
https://bugzilla.gnome.org/show_bug.cgi?id=763220
With e38af23044 returning the correct contexts,
gst_gl_display_add_context() was susceptible to causing infinite loops when
adding the same GstGLContext more than once. Fix and add a test for
gst_gl_display_add_context().
Fixes glvideomixer gst-validate tests.
Execute GL calls without marshalling them to the context thread. In the cocoa
and eagl backends calling gst_gl_context_activate is cheap and therefore calling
it on the current thread and serializing GL calls with a per-context lock is
more efficient (faster and has less overhead) than marshalling everything to the
context thread.
This optimization cuts a large overhead in g_poll (continuously waking up the
context thread) and in g_mutex_*/g_cond_* (waiting for results from the context
thread).
When requesting a glcontext (regardless of thread), the result was correct.
However, when requesting current glcontext on a specific thread, it could
come up with a glcontext active on another thread.
https://bugzilla.gnome.org/show_bug.cgi?id=763168
Maxsize is initialized once and should never change. Allocating data
should have no impact on the selected max size for this memory. This
was causing memory map failures as the maxsize would become smaller than
size. This happened when using direct rendering in avviddec on GL that
does not support PBO transfer.
https://bugzilla.gnome.org/show_bug.cgi?id=763045
When the start_type is GST_SEEK_TYPE_NONE for a forward seek (or the
stop_type for a reverse seek) in a snap seeking operation, the element
should use the current position and then snap as requested.
Also fixes uninitialized variable complaint by clang about
'ts' variable.
Intended for use with wrapped contexts that are created shared with gst's
gl contexts in order to manage the internal sharegroup state correctly.
e.g. with caopengllayer (which is used in glimagesink and caopengllayersink
on OS X), we create a CGL context from the gst context and the sharing state
was not being correctly set on either GL context and gst_gl_context_is_shared()
was always returning FALSE.
With 11fb4fff80 only flushing with multiple
shared contexts, the required flush was not occurring, causing screen
corruption or stuttering.
Note: this didn't affect GST_GL_API=opengl pipelines
https://bugzilla.gnome.org/show_bug.cgi?id=762620
When caps are already negotiated it should be possible to
select formats other than the one that was negotiated. If downstream
allows alpha video caps and it has already negotiated to a non-alpha
format, caps queries should still return the alpha caps as a possible
format as caps renegotiation can happen.
Includes tests (for compositor) to check that caps queries done after
caps have been negotiated return complete results.
https://bugzilla.gnome.org/show_bug.cgi?id=757610
Expose the expose() and set_render_rectangle() methods. These are useful for
proper functioning of the video overlay in various situations and toolkits.
H.265 7.4.7.1 says:
> When slice_deblocking_filter_disabled_flag is not present, it is
> inferred to be equal to pps_deblocking_filter_disabled_flag.
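The inference boils down to the following (spec-style names, not necessarily the parser's fields):

  static int
  infer_deblocking_disabled (int present_in_slice, int slice_flag, int pps_flag)
  {
    /* 7.4.7.1: when not present in the slice header, inherit the PPS value */
    return present_in_slice ? slice_flag : pps_flag;
  }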
https://bugzilla.gnome.org/show_bug.cgi?id=762351
Usually gl debug is initialized in gst_gl_context_create_thread.
But this function is not used when using the GstGLContextGPUProcess
from ChromiumGStreamerBackend.
Received signal 11 SEGV_MAPERR 000000000000
gst_debug_category_get_threshold
gst_gl_insert_debug_marker
gst_gl_base_filter_gl_start
CC libgstgl_x11_la-gstglcontext_glx.lo
In file included from gstglcontext_glx.c:39:0:
../utils/opengl_versions.h:52:43: error: ‘gles2_versions’ defined but not used [-Werror=unused-const-variable]
static const struct { int major, minor; } gles2_versions[] = {
^~~~~~~~~~~~~~
Invoke the callback right away when called on the context thread. Removes
overhead when nesting libgstgl calls (for example when working with the sync
meta).
CPU waits are more expensive and are only required if the CPU is ever going to
access the data. GPU waits perform inter-context synchronisation and are cheaper
as they don't require CPU intervention.
Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream; this means that all streams can snap-seek
to a different position when requested. Snap seeking in this case will
be done in 2 steps:
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
More arguments were added to the stream_seek function, allowing better control
of how seeking is done. Knowing the flags and the playback direction allows
subclasses to handle snap-seeking.
And also adds a new return parameter to inform of the final
selected seeking position that is used to align the other streams.
https://bugzilla.gnome.org/show_bug.cgi?id=759158
All uses of query->context in gstglquery.c assume it exists. We can assume
this as well before unrefing it. Furthermore, gst_object_unref() will just
silently return if it ever were to not exist.
Improves the responsiveness of the pipeline when resources are close/above the
limitations of the hardware.
Any subclass that wishes not to enable qos can do so themselves.
https://bugzilla.gnome.org/show_bug.cgi?id=761519
This reverts commit 96b9666d59.
This reverts commit d11385d167.
This breaks the texture sharing with the applemedia elements as
CVOpenGLESTextureCache seems to have an arbitrary restriction on GLES2 only.
We may need them to transform into a different set of formats.
Fixes YUV->YUV with two glcolorconverts, e.g:
format=I420 ! glcolorconvert ! glcolorconvert ! format=NV12
1.0 / width does not offset by one pixel in rectangular textures (which use
unnormalized coordinates).
Provide the actual pixel offset as a uniform to the shader.
1. Correctly describe what caps we can transform to/from.
i.e. no YUV->YUV or GRAY->YUV or YUV->GRAY (except for passthrough).
2. Prefer similar formats and ignore incompatible formats on fixation.
Useful when gst_gl_window.c::gst_gl_window_new is not used.
This is the case when using a custom GstGLWindow.
(ex: GstGLWindowGPUProcess from Chromium)
Allows the subclass to completely override the chosen src caps.
This is needed as videoaggregator generally has no idea exactly
what operation is being performed.
- Adds a fixate_caps vfunc for fixation
- Merges gst_video_aggregator_update_converters() into
gst_videoaggregator_update_src_caps() as we need some of its info
for proper caps handling.
- Pass the downstream caps to the update_caps vfunc
https://bugzilla.gnome.org/show_bug.cgi?id=756207
The gst_adaptive_demux_stream_update_source() function creates
a new GstPad called internal_pad. This pad is not freed when releasing
the stream.
The solution is to set GST_PAD_FLAG_NEED_PARENT so that the chain
functions do not get called when the pad has no parent and then
remove the parent in the gst_adaptive_demux_stream_free() function. This
causes the refcount of the pad to be set to zero.
https://bugzilla.gnome.org/show_bug.cgi?id=760982
It's useful enough already to be used in other elements for audio aggregation,
let's give people the opportunity to use it and give it some API testing.
https://bugzilla.gnome.org/show_bug.cgi?id=760733
Handling the ghostpad and its internal pad was causing more issues
than helping because of their coupled activation/deactivation
actions.
As we have to install custom chain, event and query functions it is
better to use a floating sink pad internally in the demuxer and just
use those pad functions to push through a standard pad in the demuxer
https://bugzilla.gnome.org/show_bug.cgi?id=757951
The messages are stored by gst_gl_async_debug_store_log_msg() and output later
by a corresponding store(), output() or an unset()/free().
Some wrapper macros are provided to avoid callers explicitly using __FILE__,
GST_FUNCTION and __LINE__
This makes a pipeline like:
... ! video/x-raw(memory:GLMemory),format=UYVY ! glcolorconvert !
video/x-raw(memory:GLMemory),format={UYVY, NV12} ! ...
passthrough instead of converting UYVY => NV12. The conversion would happen
before this change since the element (and basetransform) transform the src caps
to format={NV12, UYVY} (since NV12 comes first in the glcolorconvert:src
template) and then the default caps fixate func would fixate to NV12. Blah.
Also there's no need to intersect against the template caps in ::transform_caps
since basetransform does that right after calling the vfunc.
The optimistic download_transfer was not setting the required flag to not
perform glReadPixels on a subsequent map (READ), resulting in glReadPixels
happening twice.
Fixed adaptivedemux seeking without flushing that just wants
to update the stop position. This required protecting the segment
variables with a new mutex so that the seeking thread and the
download threads could safely manipulate the segment and
events related to it.
This new mutex is only locked/unlocked when starting a new
download, when the first fragment of a segment is received and
when seeking, so, hopefully, it won't damage performance.
Some operations are unnecessary when running with only a single GL
context.
e.g. glFlush when setting a fence object as the flush happens on wait.
API: gst_gl_context_is_shared
Avoids downloading and pushing a full segment just to get 1 nanosecond
of data. This happens frequently when seeking is done with flags
that adjust to boundaries or when the start is aligned with segment
starts. The latter is common when segment durations are a multiple of
a second.
For reverse, set position to segment.stop when starting and also
don't set the position to fragment end timestamp when it finishes,
just leave it at the fragment start.
1. Various elements/base classes only perform a subset check on accept-caps
2. Some GL elements have texture-target in their pad template
3. When checking subsets, only the caps to check are allowed to contain extra
fields. If the 'template' caps have extra fields, the subset fails.
Thus without texture-target on the caps, various accept-caps implementations
were failing.
Also, add some convenience functions for setting and retrieving
texture targets to/from GValue.
https://bugzilla.gnome.org/show_bug.cgi?id=759860
In very few cases the simple version was actually needed and having the
parameters hidden by a _full() version caused applications that actually
needed it to not use it.
GL 2.1 only supports pbo upload.
The wrapped data pointer was only being set on the pbo memory and on the
glmemory so when a download was requested (in GL 2.1), glmemory was
allocating a new data pointer and thus not returning the wrapped data.
Exposing the navigation thread's main context, GSourceFuncs and structs called
key_event and mouse_event is exposing a bit too much of the internals. Let's
just go with two functions to asynchronously send navigation events on the
window with the same API as the synchronous ones.
This upload method detects and optimizes uploads of DMABuf memory. This is
done by creating and caching EGLImage wrappers around the DMABuf. The
EGLImages are then bound to a texture which gets converted using a
standard shader.
Example pipeline:
GST_GL_PLATFORM=egl \
gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! \
video/x-raw,format=NV12 ! glimagesink
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Maps GstVideoFormats to suitable DRM fourccs which work with
glcolorconvert, using gst_gl_memory_alloc(). We mostly require
only 4 formats to be supported by the driver: the DRM
equivalents of RGB16, RGBA, R8 and RG88. This way it's compatible with
desktop GL, since GL_TEXTURE_2D is used, and it limits driver requirements.
With this we can virtually support all formats the glcolorconvert
supports.
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Add gst_gl_memory_allocator_get_default to get the default allocator based on
the opengl version. Allows us to stop hardcoding the PBO allocator which isn't
supported on gles2.
Fixes GL upload on iOS9 among other things.
e.g when wrapping a data pointer we don't want to map/unmap off the end of
pointer with the alignment bytes.
Instead track that information separately as maxsize is used for mapping by
GstMemory and thus represents a size without any alignment padding bytes.
Requires the usage of GstGLVideoAllocationParams however any user can set their
own parameters along with an allocator which will be used to allocate the
correct memory type.
- Create GstGLVideoAllocationParams which is a GstGLAllocationParams subclass.
- Make it possible to allocate glmemory objects directly if no frills are
needed.
This is made possible by a subclassable GstGLAllocationParams that holds
the allocation parameters
Every allocation would now go through gst_gl_base_memory_alloc with the
allocation parameters now being specified in a single struct to allow
extension by different allocators.
The imported memory has already been allocated; passing allocation
parameters with alignment confuses the memory, which ends up with a
size different from maxsize and leads to overruns when the memory
is being copied.
GCC automatically disables redundant declaration warnings for system headers. As
soon as we start using a non-system installed mesa, we would start
having issues. The test for both wasn't setting any flags, so it would
work but then fail at runtime.
This is being fixed by disabling in the code (where needed only) that
GCC warning. The test is also fixed to avoid the false positive we had.
The base class is useful for having multiple backing memory types other
than the default. e.g. IOSurface, EGLImage, dmabuf?
The PBO transfer logic is now inside GstGLMemoryPBO which uses GstGLBuffer
to manage the PBO memory.
This also moves the format utility functions into their own file.
Heavily based on GstGLBaseBuffer that is a subclass of GstGLBaseMemory.
Provides GPU and CPU accessible GL buffer objects by GL handle or by
sysmem data pointer.
It handles the following
- GstAllocationParams -> gst_memory_init transformation
- Makes sure that map/unmap/create/destroy happen on the GL thread with
a GL context current.
- Holds a possible sysmem accessible data pointer with alignment.
- Holds the needed upload/download transfer state
Rectangle textures don't use normalized coordinates so subsampling needs to be
factored in explicitly.
Fixes YUV => RGB conversion for rectangle textures.
To use GLMemory and EGLImage allocators, one need to know the
libgstgl API. This is only expected if the associated caps features
have been negotiated. Generic elements that otherwise receive those
allocators may fail, resulting in a broken pipeline. We don't want to
force all generic elements to check if the allocator is a custom
allocator or a normal allocator (which implements the _alloc method).
https://bugzilla.gnome.org/show_bug.cgi?id=758877
gstglsyncmeta.c -fPIC -DPIC -o .libs/libgstgl_1.0_la-gstglsyncmeta.o
gstglsyncmeta.c: In function 'gst_buffer_add_gl_sync_meta':
gstglsyncmeta.c:131:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
there could be other ways/requirements for synchronising two GPU command
streams (whether GL or platform specific).
e.g. glfencesync/eglwaitnative/cond/etc
This no longer does anything, and it was marked as CONSTRUCT_ONLY
which means someone would really have to go out of their way to
be able to set this, which would only be done in very custom
scenarios, if ever, and those will likely target a specific
version of GStreamer then, so probably not much point keeping
it deprecated for a while before removing it.
given a NULL-terminated string, s.
s[i] = '\0';
i++;
does not guarantee that s[i] is NULL terminated and thus string operations
could read off the end of the array.
https://bugzilla.gnome.org/show_bug.cgi?id=758039
gst_gl_shader_detach_unlocked already removes the list entry so attempting to
use the element to iterate to the next stage could read invalid data.
Based on patch by Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=758039
Some drivers don't provide the compatibility definition and we need to provide
our own 'out vec4' variable to put the results of the fragment shader into.
https://bugzilla.gnome.org/show_bug.cgi?id=757938
Rectangular textures are unavailable in unextended
GLES2 #version 100 shaders.
Fixes
texture-target=rectangle ! glcolorconvert ! texture-target=2D
There's a couple of differences between GL3 and GLES2/GL
- varying -> in or out depending on the stage (vertex/fragment)
- attribute -> in
- filtered texture access is a single function, texture()
Bitrate estimation is now handled through a queue2 element added after
the source element used to download fragments.
Original hlsdemux patch by Duncan Palmer <dpalmer@digisoft.tv>
https://bugzilla.gnome.org/show_bug.cgi?id=733959
This code will never be called as max>=min in all cases. If the upstream
latency query returned min>max, the function already returned and all
values that are added to those have max>= min.
Make gst_gl_context_gen_shader/_compile_shader assume GST_GLSL_PROFILE_ES |
GST_GLSL_PROFILE_COMPATIBILITY as the profile. Without this, the shader compiler
doesn't inject the #version tag resulting in a compilation error on Mountain
Lion.
This is a workaround for old code using gst_gl_context_gen_shader. New code
should use the gst_glsl_stage_* API directly which allows the caller to
explicitly specify version/profile.
The ret variable may be uninitialized and so its contents were undefined and
the results were erratic (failing with glvideomixer, succeeding in other cases)
P.S. No idea why gcc/clang et al never picked up on this like they normally do
(probably due to some optimisation pass figuring out it's only set once...)
They are already defined in the forward declaration header and defining them
more than once will give an error with OSX's clang about typedef redefinition
being a C11 feature.
This was chosen over relying solely on the caps as glupload needs to propose an
allocation and set the texture target based on the output caps. Setting the
caps in the config is currently pointless as they are overwritten in a lot of
element's decide_allocation functions.
This provides a mechanism for the buffer pool to be configured for a certain
texture target when none has been configured.
Solved with a simple shader templating mechanism and string replacements
of the necessary sampler types/texture accesses and texture coordinate
mangling for rectangular and external-oes textures.
Add the various tokens/strings for the different texture types (2D, rect, oes)
Changes the GLmemory api to include the GstGLTextureTarget in all relevant
functions.
Update the relevant caps/templates for 2D only textures.
The gst_adaptive_demux_stream_free function is trying to stop the stream's
download task. For this, it signals the task. But it fails to also set the
stream->download_finished = TRUE, so the task will go back to sleep and
only exit when the download is finished.
https://bugzilla.gnome.org/show_bug.cgi?id=755121
Initialize to 0 these parse structures before filling them: GstH264SEIMessage,
GstH264NalUnit, GstH264PPS, GstH264SPS and GstH264SliceHdr.
When calling the functions which fill those structures, they may fail, leaving
those structures uninitialized. This situation may lead to future problems, such
as a segmentation fault when freeing, for example.
This patch initializes to zero these structures, before filling them.
https://bugzilla.gnome.org/show_bug.cgi?id=755161
Initialize to 0 these parse structures before filling them: GstH265SEIMessage,
GstH265NalUnit, GstH265VPS, GstH265PPS, GstH265SPS and GstH265SliceHdr.
When calling the functions which fill those structures, they may fail, leaving
those structures uninitialized. This situation may lead to future problems, such
as a segmentation fault when freeing, for example.
This patch initializes to zero these structures, before filling them.
https://bugzilla.gnome.org/show_bug.cgi?id=755161
The USING_GLES2 check includes all GLES3 contexts as well, which do support
drawing to multiple buffers. Instead make our decision solely based on
whether glDrawBuffers is available or not.
Not all aggregator subclasses will have a single pad template called sink_%u
and might do something special depending on what the application requests.
https://bugzilla.gnome.org/show_bug.cgi?id=757018
e.g:
gstglcontext_egl.c:613:7: error: implicit declaration of function 'strcmp'
[-Werror=implicit-function-declaration]
if (strcmp (G_MODULE_SUFFIX, "so") == 0)
While technically, i is always 0 and *vertex_sources[i++] is equivalent
to (*vertex_sources)[i++]. Be future-proof in the case of code
moves/changes/etc.
CID 1327406
A GstGLShader is now simply a collection of stages that are
compiled and linked together into a program. The uniform/attribute
interface has remained the same.
1. So we get tracking inside GstElement properly when e.g. adding to a bin
2. Removes redundant code. Now only one place where
GstContext->GstGLDisplay/GstGLContext transformation occurs
3. Fixes a memory leak in the process
4. Make the retrieval of debug categories thread safe
These markers are visible in tools that record the GL function calls
such as apitrace, et al.
Makes it easier to match up GL draw commands with specific elements.
Otherwise they will receive a QOS event that has earliest_time=0 (because we
can't have negative timestamps), and consider their buffer as too late
https://bugzilla.gnome.org/show_bug.cgi?id=754356
- glimagesink needs to be able to resize the viewport on aspect ratio
changes resulting from either caps changes or 3d output mode changes.
- Performing a glViewport outside the GstGLWindow::resize callback
will not have the winsys' stack of viewports required to correctly
place the output frame.
Provide a function to request a resize on the next draw event from the
winsys.
Also track size changes inside the base GstGLWindow class rather
than in each subclass.
https://bugzilla.gnome.org/show_bug.cgi?id=755111
dashdemux seeks each live stream to its current fragment in the beginning, but
the base class does not know about this. Update the demuxer segment with this
seek so we generate the correct SEGMENT event and can actually play the
stream.
This needs some refactoring at some point.
https://bugzilla.gnome.org/show_bug.cgi?id=755047
when allocating memory. Fixes crashes with avdec_h265 in the AVX2
code path which requires 32-byte stride alignment, but the
GstAllocationParams only specified a 16-byte alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=754120
We have to queue buffers based on their running time, not based on
the segment position.
Also return running time from GstAggregator::get_next_time() instead of
a segment position, as required by the API.
Also only update the segment position after we pushed a buffer, otherwise
we're going to push down a segment event with the next position already.
https://bugzilla.gnome.org/show_bug.cgi?id=753196
Each period will start again with pts 0 + period presentation offset, which is
also going to be the presentation time inside the container stream if any.
However all periods together should form a continuous timeline, with regard to
stream time and running time.
For making this possible we keep track of the "user requested segment", i.e.
the seek events, inside the demuxer without adjusting anything and taking this
demuxer segment only as orientation for modified segments per stream.
These per-stream segments will have their segment.start at the pts that would be
produced for this stream in this period, and the segment.base/time adjusted so
that this pts maps to the running and stream time this period should have in
the context of all other periods.
https://bugzilla.gnome.org/show_bug.cgi?id=754222
Only accept alpha if downstream has alpha as well. It could
theoretically accept alpha unconditionally if blending were
properly implemented to handle it, but at the moment this
is a missing feature.
Improves the caps query by also comparing with the template
caps to filter by what the subclass supports.
https://bugzilla.gnome.org/show_bug.cgi?id=754465
If short_term_ref_pic_set_sps_flag is FALSE, the ShortTermRefPicSet
structure is supposed to derive from slice header. Which means,
CurrRpsIdx is equal to num_short_term_ref_pic_sets. But the number
of refpicsets communicated via sps header is only num_short_term_ref_pic_sets - 1.
And we are using slice_header structure to reference the last entry, which is
ShortTermRefPicSet[num_short_term_ref_pic_sets].
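The index selection described above amounts to the following (spec-style pseudo-helper, not the parser code):

  static unsigned int
  curr_rps_idx (int short_term_ref_pic_set_sps_flag,
      unsigned int short_term_ref_pic_set_idx,
      unsigned int num_short_term_ref_pic_sets)
  {
    if (short_term_ref_pic_set_sps_flag)
      return short_term_ref_pic_set_idx;     /* one of the SPS-signalled sets */
    /* otherwise the extra set parsed from the slice header is used */
    return num_short_term_ref_pic_sets;
  }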
https://bugzilla.gnome.org/show_bug.cgi?id=754834
HAVE_IOS is only defined for the build of this module so
attempting to use gstgl in iOS would result in incorrect GL
includes.
Use GST_GL_HAVE_PLATFORM_EAGL instead for choosing the iOS GL
header.
Section 6.5.1: Coding tree block raster and tile scanning conversion process
Follow the equations 6-3 and 6-4
This will provide correct offset_max in slice_header for parsing
num_entry_point_offsets.
https://bugzilla.gnome.org/show_bug.cgi?id=754024
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
As per 7-42 and 7-43 the ScalingFactor's scanIdx is 0,
which is "up-right-diagonal" scan. Add APIs for converting
up-right-diagonal to raster and vice versa.
https://bugzilla.gnome.org/show_bug.cgi?id=754024
Being more strict on the specification: according to 7.4.7.3,
delta_chroma_log2_weight_denom should be in the range of
[(0 - luma_log2_weight_denom), (7 - luma_log2_weight_denom)]
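A minimal sketch of that range check (not the parser's exact macro):

  #include <glib.h>

  static gboolean
  delta_chroma_denom_in_range (gint delta, gint luma_log2_weight_denom)
  {
    return delta >= (0 - luma_log2_weight_denom)
        && delta <= (7 - luma_log2_weight_denom);
  }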
https://bugzilla.gnome.org/show_bug.cgi?id=754024
During allocation query, when this element is not passthrough, it must
relay the overlay composition meta and its parameters. Fortunately, base
transform can do this for us.
https://bugzilla.gnome.org/show_bug.cgi?id=753850
GL_SHADING_LANGUAGE_VERSION was introduced in ES 2.0, but some
android emulators don't support this feature. To prevent confusion for
developers, the error message needs to be clearer.
https://bugzilla.gnome.org/show_bug.cgi?id=753905
Removes the redundant GL object creation/deletion on every
decide_allocation call which is being called for every caps change.
Thus reduces the required GL state changes on reconfigure events
which are being sent by glimagesink/xvimagesink
Instead of checking for the gstreamer-video-1.0 package is installed,
just assume it is since we already check for the -base dependency.
With this replace the GST_VIDEO_* variables in makefiles and directly
link with libgstvideo.
https://bugzilla.gnome.org/show_bug.cgi?id=753820
As we only expose the mapped portion of the frame into the GL
memory object (and not the original padding) we need to
re-calculate the size and offset.
There are several cases where a HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method was first checking to see if there was next fragments
(according to the previous manifest update) before waiting for the next update.
The problem was that if there is a temporary failure on the server,
that's uncorrelated to whether the manifest contains next fragments or not.
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills out
dynamically allocated data and requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=753306
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
ChromaLog2WeightDenom = luma_log2_weight_denom + delta_chroma_log2_weight_denom
The value of ChromaLog2WeightDenom should be in the range of 0 to 7 and
the value luma_log2_weight_denom should be also in the range of 0 to 7.
Which means, delta_chroma_log2_weight_denom can have values in the range
between -7 and 7.
https://bugzilla.gnome.org/show_bug.cgi?id=753552
Adding this private header to the list of sources. We don't want to make
this header public, but we need it in the list of sources otherwise it
won't be included in the tarball. This fixes make distcheck.
This regression was introduced by commit 1a6fe3db
Depending on the byte order we will get BGRA (little endian) or ARGB (big
endian) from the composition overlay buffer, while our GL code expects RGBA.
Add a fragment shader that does this conversion.
https://bugzilla.gnome.org/show_bug.cgi?id=752842
In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer for the pad. If the
previous element is not fast enough, it may get the next buffer even
though it may be queued just before. To prevent that race, the easiest
solution is to move the queue inside the GstAggregatorPad itself. It
also means that there is no need for strange code caused by increasing
the min latency without increasing the max latency proportionally.
This also means queuing the synchronized events and possibly acting
on them on the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
We can't know if the GstGLUpload type is initialized at this point already,
and thus our debug category might not be initialized yet... and cause an
assertion here.
As we don't print debug output for any of the other transform functions, let's
defer this problem for now.
Before aggregator based elements always started at running time 0,
now it's possible to select the first input buffer running time or
explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
Adding a pad will add a new upstream that might have a bigger minimum latency,
so we might have to wait longer. Or it might be the first live upstream, in
which case we will have to start deadline based aggregation.
Removing a pad will remove a new upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might be the last
live upstream, in which case we can stop deadline based aggregation.
The coordinates are relative to the texture dimensions and not
the window dimensions now. There is no need to pass the window
dimensions or to update the overlay if the dimensions change.
https://bugzilla.gnome.org/show_bug.cgi?id=745107
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queue of events are sent the
next time a buffer is pushed for that stream.
https://bugzilla.gnome.org/show_bug.cgi?id=705991
They require looking up some functions through the
platform specific {glX,egl}GetProcAddress rather than the default
GL library symbol lookup.
nvidia drivers return the exact version in glGetString (GL_VERSION)
we request on creation so start with the highest known version and
work our way down.
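Sketched out, the strategy is simply a walk down a version table (the creation callback and table here are hypothetical, not the actual opengl_versions.h contents):

  #include <glib.h>

  typedef gboolean (*TryCreateFunc) (int major, int minor);

  static const struct { int major, minor; } gl_versions[] = {
    {4, 5}, {4, 4}, {4, 3}, {4, 2}, {4, 1}, {4, 0}, {3, 3}, {3, 2}, {3, 1}, {3, 0},
  };

  static gboolean
  create_highest_context (TryCreateFunc try_create)
  {
    guint i;

    for (i = 0; i < G_N_ELEMENTS (gl_versions); i++)
      if (try_create (gl_versions[i].major, gl_versions[i].minor))
        return TRUE;              /* keep the first (highest) version that works */
    return FALSE;
  }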
The previous approach of traversing the other_context weak ref tree was
1. Less performant
2. Incorrect for context destruction removing a link in the tree
Example of 2:
c1 = context_create (NULL)
c2 = context_create (c1)
c3 = context_create (c2)
context_can_share (c1, c3) == TRUE
context_destroy (c2)
unref (c2)
context_can_share (c1, c3) returns FALSE when it should be TRUE!
This does not remove the restriction that context sharedness can only
be tracked between GstGLContext's.
This will deadlock if the main thread is the one who creates the GstGLContext.
All things we call from the main thread should be possible from any thread.
https://bugzilla.gnome.org/show_bug.cgi?id=751101
Sometimes the last fragment does not exist because of rounding errors with the
durations. Just finish the stream gracefully instead of erroring out.