The messages are stored by gst_gl_async_debug_store_log_msg() and output later
by a corresponding store(), output() or an unset()/free().
Some wrapper macros are provided to avoid callers explicitly using __FILE__,
GST_FUNCTION and __LINE__.
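For illustration, such a wrapper could look roughly like this (the macro
name here is made up; only the __FILE__/GST_FUNCTION/__LINE__ plumbing is
the point):

  #define GST_GL_ASYNC_DEBUG_LOG(ad, cat, level, object, format, ...)   \
      gst_gl_async_debug_store_log_msg (ad, cat, level, __FILE__,       \
          GST_FUNCTION, __LINE__, object, format, ##__VA_ARGS__)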
This makes a pipeline like:
... ! video/x-raw(memory:GLMemory),format=UYVY ! glcolorconvert !
video/x-raw(memory:GLMemory),format={UYVY, NV12} ! ...
passthrough instead of converting UYVY => NV12. Before this change the
conversion would happen, since the element (and basetransform) transformed
the src caps to format={NV12, UYVY} (NV12 comes first in the
glcolorconvert:src template) and the default caps fixate func would then
fixate to NV12.
Also there's no need to intersect against the template caps in ::transform_caps
since basetransform does that right after calling the vfunc.
The optimistic download_transfer was not setting the flag required to avoid
performing glReadPixels on a subsequent map (READ), resulting in glReadPixels
happening twice.
Fixed adaptivedemux seeking without flushing when the seek just wants
to update the stop position. This required protecting the segment
variables with a new mutex so that the seeking thread and the
download threads could safely manipulate the segment and
events related to it.
This mutex is only locked/unlocked when starting a new
download, when the first fragment of a segment is received and
when seeking so, hopefully, it won't hurt performance.
Some operations are unnecessary when running with only a single GL
context.
e.g. glFlush when setting a fence object as the flush happens on wait.
API: gst_gl_context_is_shared
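A sketch of the intended shortcut (gl/context are assumed to be set up;
function-table spellings assumed):

  GLsync fence = gl->FenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  if (gst_gl_context_is_shared (context))
    /* only needed so that other contexts can see the fence */
    gl->Flush ();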
Avoids downloading and pushing a full segment just to get 1 nanosecond
of data. This happens frequently when seeking is done with flags
that adjust to boundaries or when the start is aligned with segment
starts. The latter is common when the segment duration is a multiple of
a second.
For reverse, set position to segment.stop when starting and also
don't set the position to fragment end timestamp when it finishes,
just leave it at the fragment start.
1. Various elements/base classes only perform a subset check on accept-caps
2. Some GL elements have texture-target in their pad template
3. When checking subsets, only the caps to check are allowed to contain extra
fields. If the 'template' caps have extra fields, the subset fails.
Thus without texture-target on the caps, various accept-caps implementations
were failing.
Also, add some convenience functions for setting and retrieving
texture targets to/from GValue.
https://bugzilla.gnome.org/show_bug.cgi?id=759860
In very few cases the simple version was actually needed, and having the
parameters hidden by a _full() version caused applications that actually
needed them to not use them.
GL 2.1 only supports pbo upload.
The wrapped data pointer was only being set on the pbo memory and not on the
glmemory, so when a download was requested (in GL 2.1), glmemory was
allocating a new data pointer and thus not returning the wrapped data.
Exposing the navigation thread's main context, GSourceFuncs and structs called
key_event and mouse_event is exposing a bit too much of the internals. Let's
just go with two functions to asynchronously send navigation events on the
window with the same API as the synchronous ones.
This upload method detects and optimizes uploads of DMABuf memory. This is
done by creating and caching EGLImage wrappers around the DMABufs. The
EGLImages are then bound to a texture which gets converted using a
standard shader.
Example pipeline:
GST_GL_PLATFORM=egl \
gst-launch-1.0 v4l2src device=/dev/video1 io-mode=4 ! \
video/x-raw,format=NV12 ! glimagesink
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Maps GstVideoFormats to suitable DRM fourccs which work with
glcolorconvert, using gst_gl_memory_alloc(). We mostly require
only 4 formats to be supported by the driver: the DRM
equivalents of RGB16, RGBA, R8 and RG88. This keeps it compatible with
desktop GL, since GL_TEXTURE_2D is used, and limits the driver requirements.
With this we can support virtually all formats that glcolorconvert
supports.
https://bugzilla.gnome.org/show_bug.cgi?id=743345
Add gst_gl_memory_allocator_get_default to get the default allocator based on
the opengl version. Allows us to stop hardcoding the PBO allocator which isn't
supported on gles2.
Fixes GL upload on iOS9 among other things.
e.g. when wrapping a data pointer we don't want to map/unmap off the end of
the pointer due to the alignment bytes.
Instead track that information separately as maxsize is used for mapping by
GstMemory and thus represents a size without any alignment padding bytes.
Requires the usage of GstGLVideoAllocationParams however any user can set their
own parameters along with an allocator which will be used to allocate the
correct memory type.
- Create GstGLVideoAllocationParams which is a GstGLAllocationParams subclass.
- Make it possible to allocate glmemory objects directly if no frills are
needed.
This is made possible by a subclassable GstGLAllocationParams that holds
the allocation parameters
Every allocation now goes through gst_gl_base_memory_alloc, with the
allocation parameters specified in a single struct to allow
extension by different allocators.
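A rough sketch of the new allocation path (context, v_info and allocator
are assumed to be set up already; signatures abbreviated and assumed,
error handling omitted):

  GstGLVideoAllocationParams *params;
  GstGLBaseMemory *mem;

  params = gst_gl_video_allocation_params_new (context, NULL, &v_info, 0,
      NULL, GST_GL_TEXTURE_TARGET_2D);
  mem = gst_gl_base_memory_alloc (allocator,
      (GstGLAllocationParams *) params);
  gst_gl_allocation_params_free ((GstGLAllocationParams *) params);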
The imported memory has already been allocated; passing allocation
parameters with alignment confuses the memory, which ends up with a
size different from maxsize and leads to an overrun when the memory
is being copied.
GCC automatically disables redundancy warnings for system headers. As
soon as we start using a non-system-installed mesa, we would start
having issues. The test for both wasn't setting any flags, so it would
work but then fail at runtime.
This is fixed by disabling that GCC warning in the code (only where
needed). The test is also fixed to avoid the false positive we had.
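The in-code disable amounts to the usual pragma dance (the header name
below is just an example):

  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wredundant-decls"
  #include <GL/glext.h>
  #pragma GCC diagnostic pop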
The base class is useful for having multiple backing memory types other
than the default. e.g. IOSurface, EGLImage, dmabuf?
The PBO transfer logic is now inside GstGLMemoryPBO which uses GstGLBuffer
to manage the PBO memory.
This also moves the format utility functions into their own file.
Heavily based on GstGLBaseBuffer, which is a subclass of GstGLBaseMemory.
Provides GPU and CPU accessible GL buffer objects by GL handle or by
sysmem data pointer.
It handles the following
- GstAllocationParams -> gst_memory_init transformation
- Makes sure that map/unmap/create/destroy happen on the GL thread with
a GL context current.
- Holds a possible sysmem accessible data pointer with alignment.
- Holds the needed upload/download transfer state
Rectangle textures don't use normalized coordinates so subsampling needs to be
factored in explicitly.
Fixes YUV => RGB conversion for rectangle textures.
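For example, sampling the centre of a 320x240 frame (GLSL, illustrative
only):

  2D (normalized):  texture2D (tex, vec2 (0.5, 0.5))
  rectangle:        texture2DRect (tex, vec2 (160.0, 120.0))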
To use the GLMemory and EGLImage allocators, one needs to know the
libgstgl API. This is only expected if the associated caps features
have been negotiated. Generic elements that otherwise receive those
allocators may fail, resulting in a broken pipeline. We don't want to
force all generic elements to check if the allocator is a custom
allocator or a normal allocator (one which implements the _alloc method).
https://bugzilla.gnome.org/show_bug.cgi?id=758877
gstglsyncmeta.c -fPIC -DPIC -o .libs/libgstgl_1.0_la-gstglsyncmeta.o
gstglsyncmeta.c: In function 'gst_buffer_add_gl_sync_meta':
gstglsyncmeta.c:131:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
there could be other ways/requirements for synchronising two GPU command
streams (whether GL or platform specific).
e.g. glfencesync/eglwaitnative/cond/etc
This no longer does anything, and it was marked as CONSTRUCT_ONLY,
which means someone would really have to go out of their way to
be able to set this, which would only be done in very custom
scenarios, if ever, and those will likely target a specific
version of GStreamer, so there is probably not much point keeping
it deprecated for a while before removing it.
given a NULL-terminated string, s.
s[i] = '\0';
i++;
does not guarantee that s[i] is NULL-terminated and thus string operations
could read off the end of the array.
https://bugzilla.gnome.org/show_bug.cgi?id=758039
gst_gl_shader_detach_unlocked already removes the list entry so attempting to
use the element to iterate to the next stage could read invalid data.
Based on patch by Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=758039
Some drivers don't provide the compatibility definition and we need to provide
our own 'out vec4' variable to put the results of the fragment shader into.
https://bugzilla.gnome.org/show_bug.cgi?id=757938
Rectangular textures are unavailable in unextended
GLES2 #version 100 shaders.
Fixes
texture-target=rectangle ! glcolorconvert ! texture-target=2D
There's a couple of differences between GL3 and GLES2/GL
- varying -> in or out depending on the stage (vertex/fragment)
- attribute -> in
- filtered texture access is a single function, texture()
Bitrate estimation is now handled through a queue2 element added after
the source element used to download fragments.
Original hlsdemux patch by Duncan Palmer <dpalmer@digisoft.tv>
https://bugzilla.gnome.org/show_bug.cgi?id=733959
This code will never be called as max>=min in all cases. If the upstream
latency query returned min>max, the function already returned and all
values that are added to those have max>= min.
Make gst_gl_context_gen_shader/_compile_shader assume GST_GLSL_PROFILE_ES |
GST_GLSL_PROFILE_COMPATIBILITY as the profile. Without this, the shader compiler
doesn't inject the #version tag resulting in a compilation error on Mountain
Lion.
This is a workaround for old code using gst_gl_context_gen_shader. New code
should use the gst_glsl_stage_* API directly which allows the caller to
explicitly specify version/profile.
The ret variable may be uninitialized and so its contents were undefined and
the results were erratic (failing with glvideomixer, succeeding in other cases)
P.S. No idea why gcc/clang et al never picked up on this like they normally do
(probably due to some optimisation pass figuring out it's only set once...)
They are already defined in the forward declaration header and defining them
more than once will give an error with OSX's clang about typedef redefinition
being a C11 feature.
This was chosen over relying solely on the caps as glupload needs to propose an
allocation and set the texture target based on the output caps. Setting the
caps in the config is currently pointless as they are overwritten in a lot of
element's decide_allocation functions.
This provides a mechanism for the buffer pool to be configured for a certain
texture target when none has been configured.
Solved with a simple shader templating mechanism and string replacements
of the necessary sampler types/texture accesses and texture coordinate
mangling for rectangular and external-oes textures.
Add the various tokens/strings for the different texture types (2D, rect, oes).
Changes the GLMemory api to include the GstGLTextureTarget in all relevant
functions.
Update the relevant caps/templates for 2D only textures.
The gst_adaptive_demux_stream_free function is trying to stop the stream's
download task. For this, it signals the task. But it fails to also set the
stream->download_finished = TRUE, so the task will go back to sleep and
only exit when the download is finished.
https://bugzilla.gnome.org/show_bug.cgi?id=755121
Initialize to 0 these parse structures before filling them: GstH264SEIMessage,
GstH264NalUnit, GstH264PPS, GstH264SPS and GstH264SliceHdr.
When calling the functions which fill those structures, they may fail, leaving
those structures uninitialized. This situation may lead to future problems,
such as a segmentation fault when freeing them.
This patch initializes to zero these structures, before filling them.
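A minimal sketch of the pattern for one of them (the parse call is the
existing parser API; parser and nalu are assumed to be set up):

  GstH264SliceHdr slice_hdr;

  memset (&slice_hdr, 0, sizeof (slice_hdr));
  if (gst_h264_parser_parse_slice_hdr (parser, &nalu, &slice_hdr,
          TRUE, TRUE) != GST_H264_PARSER_OK) {
    /* slice_hdr is still zeroed, so cleanup won't touch garbage pointers */
  }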
https://bugzilla.gnome.org/show_bug.cgi?id=755161
Initialize to 0 these parse structures before filling them: GstH265SEIMessage,
GstH265NalUnit, GstH265VPS, GstH265PPS, GstH265SPS and GstH265SliceHdr.
When calling the functions which fill those structures, they may fail, leaving
those structures uninitialized. This situation may lead to future problems,
such as a segmentation fault when freeing them.
This patch initializes to zero these structures, before filling them.
https://bugzilla.gnome.org/show_bug.cgi?id=755161
USING_GLES2 includes all GLES3 contexts as well, which do support
drawing to multiple buffers. Instead make our decision solely based on
whether glDrawBuffers is available or not.
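i.e. something along these lines (sketch; gl and the buffers array are
assumed):

  if (gl->DrawBuffers)
    gl->DrawBuffers (n_attachments, buffers);  /* MRT available */
  /* otherwise fall back to a single draw buffer */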
Not all aggregator subclasses will have a single pad template called sink_%u
and might do something special depending on what the application requests.
https://bugzilla.gnome.org/show_bug.cgi?id=757018
e.g:
gstglcontext_egl.c:613:7: error: implicit declaration of function 'strcmp'
[-Werror=implicit-function-declaration]
if (strcmp (G_MODULE_SUFFIX, "so") == 0)
While technically i is always 0 and *vertex_sources[i++] is equivalent
to (*vertex_sources)[i++], be future-proof in the case of code
moves/changes/etc.
CID 1327406
A GstGLShader is now simply a collection of stages that are
compiled and linked together into a program. The uniform/attribute
interface has remained the same.
1. So we get tracking inside GstElement properly when e.g. adding to a bin
2. Removes redundant code. Now only one place where
GstContext->GstGLDisplay/GstGLContext transformation occurs
3. Fixes a memory leak in the process
4. Make the retrieval of debug categories thread safe
These markers are visible in tools that record the GL function calls
such as apitrace, et al.
Makes it easier to match up GL draw commands with specific elements.
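e.g. a marker inserted through the KHR_debug entry point (the message
string is illustrative):

  gl->DebugMessageInsert (GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_MARKER,
      0, GL_DEBUG_SEVERITY_LOW, -1, "glimagesinkelement: draw");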
Otherwise they will receive a QOS event that has earliest_time=0 (because we
can't have negative timestamps), and consider their buffer as too late
https://bugzilla.gnome.org/show_bug.cgi?id=754356
- glimagesink needs to be able to resize the viewport on aspect ratio
changes resulting from either caps changes or 3d output mode changes.
- Performing a glViewport outside the GstGLWindow::resize callback
will not have the winsys' stack of viewports required to correctly
place the output frame.
Provide a function to request a resize on the next draw event from the
winsys.
Also track size changes inside the base GstGLWindow class rather
than in each subclass.
https://bugzilla.gnome.org/show_bug.cgi?id=755111
dashdemux seeks each live stream to its current fragment in the beginning, but
the base class does not know about this. Update the demuxer segment with this
seek so we generate the correct SEGMENT event and can actually play the
stream.
This needs some refactoring at some point.
https://bugzilla.gnome.org/show_bug.cgi?id=755047
when allocating memory. Fixes crashes with avdec_h265 in the AVX2
code path which requires 32-byte stride alignment, but the
GstAllocationParams only specified a 16-byte alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=754120
We have to queue buffers based on their running time, not based on
the segment position.
Also return running time from GstAggregator::get_next_time() instead of
a segment position, as required by the API.
Also only update the segment position after we pushed a buffer, otherwise
we're going to push down a segment event with the next position already.
https://bugzilla.gnome.org/show_bug.cgi?id=753196
Each period will start again with pts 0 + period presentation offset, which is
also going to be the presentation time inside the container stream if any.
However all periods together should form a continuous timeline, with regard to
stream time and running time.
For making this possible we keep track of the "user requested segment", i.e.
the seek events, inside the demuxer without adjusting anything and taking this
demuxer segment only as orientation for modified segments per stream.
These per-stream segments will have their segment.start at the pts that would
be produced for this stream in this period, and segment.base/time adjusted so
that this pts maps to the running and stream time this period should have in
the context of all other periods.
https://bugzilla.gnome.org/show_bug.cgi?id=754222
Only accept alpha if downstream has alpha as well. It could
theoretically accept alpha unconditionally if blending were
properly implemented to handle it, but at the moment this
is a missing feature.
Improves the caps query by also comparing with the template
caps to filter by what the subclass supports.
https://bugzilla.gnome.org/show_bug.cgi?id=754465
If short_term_ref_pic_set_sps_flag is FALSE, the ShortTermRefPicSet
structure is supposed to derive from slice header. Which means,
CurrRpsIdx is equal to num_short_term_ref_pic_sets. But the number
of refpicsets communicated via sps header is only num_short_term_ref_pic_sets - 1.
And we are using slice_header structure to reference the last entry, which is
ShortTermRefPicSet[num_short_term_ref_pic_sets].
https://bugzilla.gnome.org/show_bug.cgi?id=754834
HAVE_IOS is only defined for the build of this module so
attempting to use gstgl in iOS would result in incorrect GL
includes.
Use GST_GL_HAVE_PLATFORM_EAGL instead for choosing the iOS GL
header.
Section 6.5.1: Coding tree block raster and tile scanning conversion process
Follow the equations 6-3 and 6-4
This will provide correct offset_max in slice_header for parsing
num_entry_point_offsets.
https://bugzilla.gnome.org/show_bug.cgi?id=754024
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
As per 7-42 and 7-43 the ScalingFactor's scanIdx is 0,
which is the "up-right-diagonal" scan. Add APIs for converting
up-right-diagonal to raster and vice versa.
https://bugzilla.gnome.org/show_bug.cgi?id=754024
Be more strict about the specification: according to 7.4.7.3,
delta_chroma_log2_weight_denom should be in the range of
[(0 - luma_log2_weight_denom), (7 - luma_log2_weight_denom)].
https://bugzilla.gnome.org/show_bug.cgi?id=754024
During the allocation query, when this element is not in passthrough, it must
relay the overlay composition meta and its parameters. Fortunately, base
transform can do this for us.
https://bugzilla.gnome.org/show_bug.cgi?id=753850
GL_SHADING_LANGUAGE_VERSION has been available since ES 2.0, but some
Android emulators don't support it. To prevent confusion for
developers, the error message needs to be clearer.
https://bugzilla.gnome.org/show_bug.cgi?id=753905
Removes the redundant GL object creation/deletion on every
decide_allocation call, which is being called for every caps change.
This reduces the required GL state changes on reconfigure events,
which are being sent by glimagesink/xvimagesink.
Instead of checking whether the gstreamer-video-1.0 package is installed,
just assume it is since we already check for the -base dependency.
With this, replace the GST_VIDEO_* variables in makefiles and directly
link with libgstvideo.
https://bugzilla.gnome.org/show_bug.cgi?id=753820
As we only expose the mapped portion of the frame into the GL
memory object (and not the original padding) we need to
re-calculate the size and offset.
There are several cases where a HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method was first checking to see if there were next fragments
(according to the previous manifest update) before waiting for the next update.
The problem was that if there is a temporary failure on the server,
that's uncorrelated to whether the manifest contains next fragments or not.
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills out
dynamically allocated data and requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=753306
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
ChromaLog2WeightDenom = luma_log2_weight_denom + delta_chroma_log2_weight_denom
The value of ChromaLog2WeightDenom should be in the range of 0 to 7 and
the value luma_log2_weight_denom should be also in the range of 0 to 7.
Which means, delta_chroma_log2_weight_denom can have values in the range
between -7 and 7.
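In the parser this boils down to a range check like the following (macro
usage sketched; exact spelling may differ):

  READ_SE_ALLOWED (nr, p->delta_chroma_log2_weight_denom, -7, 7);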
https://bugzilla.gnome.org/show_bug.cgi?id=753552
Adding this private header to the list of sources. We don't want to make
this header public, but we need it in the list of sources otherwise it
won't be included in the tarball. This fixes make distcheck.
This regression was introduced by commit 1a6fe3db
Depending on the byte order we will get BGRA (little) or ARGB (big)
from the composition overlay buffer, while our GL code expects RGBA. Add
a fragment shader that does this conversion.
https://bugzilla.gnome.org/show_bug.cgi?id=752842
In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer for the pad. If the
previous element is not fast enough, it may get the next buffer even
though it may be queued just before. To prevent that race, the easiest
solution is to move the queue inside the GstAggregatorPad itself. It
also means that there is no need for strange code caused by increasing
the min latency without increasing the max latency proportionally.
This also means queuing the synchronized events and possibly acting
on them on the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
We can't know if the GstGLUpload type is initialized at this point already,
and thus our debug category might not be initialized yet... and cause an
assertion here.
As we don't print debug output for any of the other transform functions, let's
defer this problem for now.
Before, aggregator-based elements always started at running time 0;
now it's possible to select the first input buffer running time or to
explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
Adding a pad will add a new upstream that might have a bigger minimum latency,
so we might have to wait longer. Or it might be the first live upstream, in
which case we will have to start deadline based aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might have been the
last live upstream, in which case we can stop deadline based aggregation.
The coordinates are relative to the texture dimensions and not
the window dimensions now. There is no need to pass the window
dimensions or to update the overlay when the dimensions change.
https://bugzilla.gnome.org/show_bug.cgi?id=745107
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queued events are sent the
next time a buffer is pushed for that stream.
https://bugzilla.gnome.org/show_bug.cgi?id=705991
They require retrieving some functions through the
platform specific {glX,egl}GetProcAddress rather than the default
GL library symbol lookup.
nvidia drivers return from glGetString (GL_VERSION) exactly the version
we request on creation, so start with the highest known version and
work our way down.
The previous approach of traversing the other_context weak ref tree was
1. Less performant
2. Incorrect for context destruction removing a link in the tree
Example of 2:
c1 = context_create (NULL)
c2 = context_create (c1)
c3 = context_create (c2)
context_can_share (c1, c3) == TRUE
context_destroy (c2)
unref (c2)
context_can_share (c1, c3) returns FALSE when it should be TRUE!
This does not remove the restriction that context sharedness can only
be tracked between GstGLContext's.
This will deadlock if the main thread is the one who creates the GstGLContext.
All things we call from the main thread should be possible from any thread.
https://bugzilla.gnome.org/show_bug.cgi?id=751101
Sometimes the last fragment does not exist because of rounding errors with the
durations. Just finish the stream gracefully instead of erroring out.
Segment start/time/position/base should only be modified if this is the first
time we send a segment, otherwise we will override values from the seek
segment if new streams have to be exposed as part of the seek.
Segment base should be calculated from the segment start based on the stream's
own segment, not the demuxer's segment. Both might differ slightly because of
the presentationTimeOffset.
Always add the presentationTimeOffset (relative to the period start, not
timestamp 0) to the segment start after resetting the stream's segment based
on the demuxer's segment (i.e. after seeks or stream restart). Also make sure
to keep the stream's segment up to date and not just send a new segment event
without storing the segment in the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
And include the presentation offset in the last known position for each
stream; and, just because we can, also keep track of the latest known
position inside the demuxer segment.
It's going to return EOS if the period ended or otherwise there is just no
next fragment left. If we don't store the last return value, it will always
stay OK and gst_adaptive_demux_combine_flows() will always return OK instead
of EOS once all streams are done.
This partially handles period changes in DASH by at least trying to switch
instead of just stopping. What is still left is that after a period change
with DASH the times all start at 0 again instead of continuing.
The problem here was that after removing the formats and
all the things we could convert, we then intersected these
caps with the template caps.
Hence if a subclass offered permissive sink templates
(eg all the possible formats videoconvert handles), but only
one output format, then at negotiation time getcaps returned
caps with the format restricted to that format, even though
we do handle conversion.
https://bugzilla.gnome.org/show_bug.cgi?id=751255
- add data pointer to GstJpegSegment and pass segment
to all parsing functions, rename accordingly
- shorten GstJpegMarkerCode enum type name to GstJpegMarker
- move function gtk-doc blurbs into .c file
- add since markers
- flesh out docs for SOF markers
https://bugzilla.gnome.org/show_bug.cgi?id=673925
Fix scan for next marker code when there is an odd number of filler
(0xff) bytes before the actual marker code. Also optimize the loop
to execute with fewer instructions (~10%).
This fixes parsing for Spectralfan.mov.
The size of a marker segment is defined to be exclusive of any initial
marker code. So, fix the size for SOI, EOI and APPn segments but also
the size of any possible segment that is usually "reserved" or not
explicitly defined.
https://bugzilla.gnome.org/show_bug.cgi?id=707447
Add API for a helper object that can convert between different
stereoscopic video representations, and later do filtering
of multiple view streams.
https://bugzilla.gnome.org/show_bug.cgi?id=611157
Switch the increment of markersize from when it is used to when it is
returned from compute_resync_marker_size.
This also makes the CHECK_REMAINING in gst_mpeg4_parse_video_packet_header
check for the actually required number of bits now and not one too few.
https://bugzilla.gnome.org/show_bug.cgi?id=739345
This reverts commit 916b954315.
Clearly something else was intended, and it also makes
more sense to add the extra bit. The resync marker is
N zero bits plus a 1 bit, and the pattern/mask needs to
be run on N+1 bits too.
(Even after the revert the code doesn't do that of course, so
it still needs to be fixed differently.)
https://bugzilla.gnome.org/show_bug.cgi?id=739345
When set, it causes videoaggregator to repeatedly aggregate the last buffer on
an EOS pad instead of skipping it and outputting silence. This is useful, for
instance, while playing back files seamlessly one after the other, to avoid
videoaggregator ever outputting silence (the checkerboard pattern).
It is to be noted that if all the pads on videoaggregator have this property set
on them, the mixer will never forward EOS downstream for obvious reasons. Hence,
at least one pad with 'ignore-eos' set to FALSE must send EOS to the mixer
before it will be forwarded downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=748946
Short sections have 3 bytes of common header, while other sections
have 8 bytes of common header. If packetizing common header of short
section, we should stop after the first 3 bytes.
https://bugzilla.gnome.org/show_bug.cgi?id=735653
Provides generic handling of GL buffer objects accessible using
the GL bind points (GL_ARRAY_BUFFER, GL_PIXEL_*_BUFFER).
Implementation based off the current GstGLMemory.
5697b6b89b causes us to possibly listen
on a toolkit provided Display connection. We thus could eat their
precious winsys events. Only listen if we need to
(!foreign_display or videooverlay).
It's true that we shouldn't consider errors fatal immediately, but if we
always ignore them we will loop infinitely on live streams with segments
that can't be downloaded at all.
Add a vfunc that is called by glfilter before it sets
caps features and intersects with the peer caps, and
move removing the size from caps into its default
implementation. Allows sub-classes to do more
sophisticated management of the size fields in case they
don't support arbitrary resizing or have distinct
preferences.
Add preserve_update_caps_result boolean on the class to allow
sub-classes to disable videoaggregator removing sizes and framerate
from the update_caps() return result.
A return value of GST_FLOW_OK with a NULL buffer from get_output_buffer()
means the sub-class doesn't want to produce an output buffer, so
skip it.
If gst_videoaggregator_do_aggregate() generates an error, make sure
to propagate it - don't just ignore and discard the error by
over-writing it with the gst_pad_push() result.
Previously when compiling GstGL with both GL and GLES2,
GL_RGBA8 was picked from GL/gl.h. But a clash may happen at
runtime when GLES2 is selected.
gst_gl_internal_format_rgba allows checking at runtime
whether GL_RGBA or GL_RGBA8 should be used.
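e.g. (assuming the helper takes the GstGLContext; gl, width and height are
assumed to be set up):

  guint ifmt = gst_gl_internal_format_rgba (context);
  gl->TexImage2D (GL_TEXTURE_2D, 0, ifmt, width, height, 0, GL_RGBA,
      GL_UNSIGNED_BYTE, NULL);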
Simple implementation split from GstGLWindowWayland
Can now have multiple glimagesink elements all displaying output
linked via GL or otherwise (barring GL platform limitations).
The intel driver is racy and can crash setting up the two glimagesink contexts.
e.g.
videotestsrc ! tee name=t ! queue ! glupload ! glimagesinkelement
t. ! queue ! gleffects_blur ! glimagesinkelement
videotestsrc ! glupload ! glfiltercube ! tee name=t ! queue ! glimagesinkelement
t. ! queue ! gleffects_blur ! glimagesinkelement
Otherwise we could end up mistaken about the difference between a
gl3 and a gl2 context, resulting in a failure getting the list of
extensions from the wrapped context due to the difference between
glGetString and glGetStringi for the GL_EXTENSIONS token.
https://bugzilla.gnome.org/show_bug.cgi?id=749728
When called from gst_gl_window_win32_close(), the internal window
might not exist, and if it does it's going to be destroyed just
after that anyway. Also it causes window_proc() to be called
and crash because it gets a NULL context.
When called from gst_gl_window_win32_set_window_handle() we are
going to set another parent anyway, and it's probably better to
reparent directly instead of going through a NULL parent, which could
cause the internal window to pop up briefly.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
gst_gl_context_finalize() is calling gst_gl_window_win32_quit(),
which was posting a message. But then window_proc takes the window's
context and gets NULL.
Now that we've got a GMainLoop we can do like other backends and
simply call g_main_loop_quit().
This also removes duplicated code to release the parent window and
a potential crash there, because parent_proc could be NULL if we never
created the internal window. That could happen for example if setting the
state to READY, then setting a window_handle, and going back to NULL state.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
gst_gl_window_win32_send_message_async() could be called before the
internal window is created so we cannot use PostMessage there.
x11 and wayland backends both create a custom GSource for this,
so there is no reason to not do that for win32.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
Otherwise it could stay client side without being submitted to the GL
server, resulting in another context waiting on a fence that will never
become signalled, causing a deadlock.
Make the passthrough check contingent on only the fields we
can modify being unchanged, and pre-compute it when caps
change instead of checking on each buffer. Makes the passthrough
more lenient if consumers are lax about making input and output
caps complete.
The EOS and EOB nals have size 2, which is the size of the
nal unit header itself. gst_h265_parser_identify_nalu()
is not required to scan for a start code again in this case.
In other cases, for a valid nal unit the minimum required size
is 3 bytes (2 byte header and at least 1 byte of RBSP payload).
Skip the byte alignment bits as per the logic of byte_alignment()
provided in hevc specification. This will fix the calculation of
slice header size.
https://bugzilla.gnome.org/show_bug.cgi?id=747613
Even for "live" streams we are not live in the GStreamer meaning of the word.
We don't produce buffers that are timestamped based on their "capture time"
and our clock, but just based on whatever timestamps the stream might contain.
Also even if we wanted to claim to be live, that wouldn't work well as we
would have to return GST_STATE_CHANGE_NO_PREROLL when going from READY to
PAUSED, which we can't. We first need data to know if we are "live" or not.
It would deadlock, as we would then join() the update task from itself. Instead
just post an actual error message on the bus and only stop the update task.
The application is then responsible for shutting down the element, and thus
all the other tasks and everything, based on the error message it gets.
This would've also triggered if for some reason the segment was updated
in such a way that PTS went backwards, but the running time increased. Like
what happens when non-flushing seeks are done.
We're doing a proper buffer-from-the-past check a few lines below based on the
running time, which is the only time we should care about here.
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called
with a private lock, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live as
we think that 0 upstream latency is possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
It might return OK from subclasses and it could cause a bitrate
renegotiation. For DASH and MSS that is ok, as they won't expose
new pads as part of this, but it can cause issues for HLS as
it will expose new pads, leading to pads that will only get EOS,
which causes decodebin to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=745905
Show the DispmanX window only if there's no shared external GL context
set up. When a window is required by the context a transparent
DispmanX element is created and later on made visible by the ::show
method.
https://bugzilla.gnome.org/show_bug.cgi?id=746632
In some upload implementations the out buffer has more than one reference,
making the buffer not writable, so it won't be possible to modify its
meta-data.
This patch moves the meta-data copy to before increasing the reference
count of the out buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=746173
Asks the subclass for a potential time offset to apply to each
separate stream; in DASH, streams can have "presentation time offsets",
which can be different for each stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Chaining a downstream pool would lead to two owners of the same
pool. In a dynamic pipeline, if one owner is removed from the pipeline
the pool will be stopped, and the rest of the pipeline will fail
since the pool will now be flushing. Also fix caching of the proposed
pool; filter->pool was never set and never unreffed.
https://bugzilla.gnome.org/show_bug.cgi?id=745705
In case the original caps were missing some optional fields like
interlace-mode. We assume default values for those everywhere,
but they can still cause negotiation to fail if a downstream element
expects the field to be there and at a specific value.
Otherwise the pipeline stalls when running
more than one glimagesink with gst-launch.
Also only register the custom nsapp loop
when setting up the nsapp from gstgl.
We also need to recalculate the offset, since otherwise the frame
mapping will be forward two lines in the U and V planes (I420) due
to gst_video_info_align() rounding up the Y plane to an even number of
lines.
https://bugzilla.gnome.org/show_bug.cgi?id=745054
Make sure we support offset and video alignment when downloading too.
This is currently not used (plane_start is always 0), but it makes
the code correct if we want to use that later.
Provide the right size to GL when uploading. Using maxsize is wrong
since we offset the data pointer with the memory offset and video
alignment offset.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
When the memory is a partial copy, the texture size and video info no
longer make sense. As we cannot guess what the application wants, we
safely copy into a sysmem memory.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
This implements support for GstAllocationParams and memory alignments.
The parameters were simply ignored, which could lead to crashes on
certain platforms when used with libav, given bad luck.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
When trying to render buffers with meta:GLTextureUpload, glimagesink crashes
with a segmentation fault.
This patch works around the crash by setting the method implementation to
NULL after freeing it.
https://bugzilla.gnome.org/show_bug.cgi?id=745206
When setting a new window handle, we need to ensure all implementations
will detect the change.
For that we deactivate the context before setting the window handle, then
reactivate the context.
https://bugzilla.gnome.org/show_bug.cgi?id=745090
When (re)activating the context, the backing window handle might have changed.
If that happened, destroy the previous surface and create a new one.
https://bugzilla.gnome.org/show_bug.cgi?id=745090
Causes the following warning on clang:
gst-dvb-section.c:567:36: error: format specifies type 'unsigned long' but the argument has type 'int' [-Werror,-Wformat]
descriptors_loop_length, end - 4 - data);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
This fixes a very slow rendering rate regression that only
happens when using gst-launch, i.e. in the case where
the main thread does not run any NSApp loop.
Git bisect reported it has been introduced by the commit
e10d2417e2:
"move to CGL and CAOpenGLLayer for rendering".
Then the commit 7d46357627:
"gstglwindow_cocoa: fix slow render rate" attempted to fix
the slow rendering rate problem when using gst-launch.
At least for me it does not work. I tried several
combinations, for example flushing CA transactions in the
custom app loop, as mentioned in the doc, but the only solution
that fixes the slow rendering is reducing the loop latency.
From what I tested, there is no need to go below 60ms, even if the
framerate interval is much lower (16.6ms for 60 fps).
Anytime else, we have no idea how to match up maps and unmaps.
We also don't know exactly how the calling code is using us.
Also fixes the case where we're trying to transfer while someone else
is accessing our data pointer or texture resulting in mismatched video
frames.
https://bugzilla.gnome.org/show_bug.cgi?id=744839
One has to use the src_lock anyway to protect the min/max/live so they
can be notified atomically to the src thread to wake it up on changes,
such as property changes. So no point in having a second lock.
Also, the object lock was being held across a call to
GST_ELEMENT_WARNING, guaranteeing a deadlock.
While gst_aggregator_iterate_sinkpads() makes sure that every pad is only
visited once, even when the iterator has to resync, this is not all we have
to do for querying the latency. When the iterator resyncs we actually have
to query all pads for the latency again and forget our previous results. It
might have happened that a pad was removed, which influenced the result of
the latency query.
It was between another function and its helper function before, which was
confusing when reading the code as it had nothing to do with the other
functions.
This lock is not what is commonly known as a "stream lock" in GStreamer:
it's not recursive, and it's taken from the non-serialized FLUSH_START event.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Move the property from subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed
from mssdemux and hlsdemux as it is now in the base class.
Don't use private GMutex implementation details to check
whether it has been freed already or not. Just clear mutex
and GCond unconditionally in free function, they are always
inited anyway, and the free function can't be called multiple
times either.
steal_buffer() + unref seems to be a wide-spread idiom
(which perhaps indicates that something is not quite
right with the way aggregator pad works currently).
Where possible, use the _OBJECT variants in order to better track
which object the debug statement is coming from.
Define (and use) GST_CAT_DEFAULT where applicable
Use GST_PTR_FORMAT where applicable
And use the average to go up in resolution, and the last fragment
bitrate to go down.
This allows the demuxer to react rapidly to bitrate loss, and
be conservative for bitrate improvements.
+ Add a construct only property to define the number of fragments
to consider when calculating the average moving bitrate.
https://bugzilla.gnome.org/show_bug.cgi?id=742979
If the src framerate and videoaggregator's output framerate were
different, then we were taking every single buffer that had duration=-1
as it came in, regardless of the buffer's start time. This caused the src
to possibly run at a different speed than the output frames.
https://bugzilla.gnome.org/show_bug.cgi?id=744096
In gst_gl_filter_fixate_caps () it can jump to done without freeing the
memory of the tmp GstStructure, which then goes out of scope and leaks.
CID #1265765
Allows finer grain decisions about formats and features at each
stage of the pipeline.
Also provide propose_allocation for glupload based on the supported
methods.
In gst_gl_window_cocoa_draw we used to just call setNeedsDisplay:YES. That was
creating an implicit CA transaction which was getting committed at the next
runloop iteration. Since we don't know how often the main runloop is running,
and when we run it implicitly (from gst_gl_window_cocoa_nsapp_iteration) we only
do so every 200ms, use an explicit CA transaction instead and commit it
immediately. CA transactions nest and debounce automatically so this will never
result in extra work.
Make GstGLMemory hold the texture target (tex_target) the texture it represents
(tex_id) is bound to. Modify gst_gl_memory_wrapped_texture and
gst_gl_download_perform_with_data to take the texture target as an argument.
This change is needed to support wrapping textures created outside libgstgl,
which might be bound to a target other than GL_TEXTURE_2D. For example on OSX
textures coming from VideoToolbox have target GL_TEXTURE_RECTANGLE.
With this change we still keep (and sometimes imply) GL_TEXTURE_2D as the
target of textures created with libgstgl.
API: modify GstGLMemory
API: modify gst_gl_memory_wrapped_texture
API: gst_gl_download_perform_with_data
Don't call glClear && glClearColor at each draw since we're going to draw the
whole viewport anyway. Gets rid of a glFlush triggered by glClear on OSX.
Instead of using the GST_OBJECT_LOCK we should have
a dedicated mutex for the pad, as it is also associated
with the EVENT_MUTEX on which we wait
in the _chain function of the pad.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Some members sometimes used atomic access, sometimes were not locked at
all. Instead consistently use a mutex to protect them, and also document
that.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Reduce the number of locks to simplify the code; what the lock protects
is exposed, but the lock itself was not.
Also means adding an _unlocked version of gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Whenever a GCond is used, the safest paradigm is to protect
the variable which change is signalled by the GCond with the same
mutex that the GCond depends on.
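i.e. the standard GLib pattern (generic sketch, not code from this commit;
lock, cond and state are assumptions):

  /* signalling side */
  g_mutex_lock (&lock);
  state = NEW_STATE;
  g_cond_signal (&cond);
  g_mutex_unlock (&lock);

  /* waiting side */
  g_mutex_lock (&lock);
  while (state != NEW_STATE)
    g_cond_wait (&cond, &lock);
  g_mutex_unlock (&lock);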
https://bugzilla.gnome.org/show_bug.cgi?id=742684
This can happen if this is a live pipeline and no source produced any buffer
and sent no caps until an output buffer should've been produced according to the
latency.
This fix is similar in spirit to commit be7034d1 by Sebastian for audiomixer.
In order to use pbo's efficiently, the transfer operation has to
be separated from the use of the downloaded data, which requires some
rearchitecting around glcolorconvert/gldownload and related elements.
Unset out buffer in clip function when we unref the buffer to be
clipped, otherwise aggregator will continue to use the already-
freed buffer. Fixes crash when buffers without timestamps are
being fed to aggregator. Partly because aggregator ignores the
error flow return.
https://bugzilla.gnome.org/show_bug.cgi?id=743334
READ_UE_ALLOWED was almost exclusively used with min == 0, which doesn't
make much sense for unsigned integers.
Add a READ_UE_MAX variant and use that instead. Also replaced two usages
of CHECK_ALLOWED (a,0,something) with CHECK_ALLOWED_MAX (a, something).
Depending on the platform, it was only ever implemented to 1) set a
default surface size, 2) resize based on the video frame or 3) nothing.
Instead, provide a set_preferred_size () that elements/applications
can use to request a certain size which may be ignored for
videooverlay/other cases.
Removes the use of NSOpenGL* variety and functions. Any Cocoa
specific functions that took/returned a NSOpenGL* object now
take/return the CGL equivalents.
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request
If we say it is the first segment after a new period it will resync
the segment.start value and all buffers will be late for the new period
we are trying to play. Otherwise we want to keep the segment.start with
the previous value to allow the running time to smoothly increase
Check if there is a next fragment before advancing to avoid causing
a bitrate switch (and maybe exposing new pads) only to push EOS.
That would cause playback to stop with an error instead of properly
finishing with an EOS message.
The subsegment boundary return tells the adaptivedemux that it can
try to switch to another representation as the stream is at a suitable
position for starting from another bitrate.
In order to get some subsegment information, subclasses might want
to download only the headers to have enough data (the index)
to decide where to start downloading from the subsegment.
This allows the subclasses to know if the chunks that are downloaded are
part of the header or of the index and will parse the parts that are
of their interest.
Ensure that we do not trust the bitstream when filling a table
with a fixed max size.
Additionally, the code was not quite matching what the spec says:
- a value of 3 broke from the loop before adding an entry
- an unhandled value did not add an entry
The reference algorithm does these things differently (7.3.3.1
in ITU-T Rec. H.264 (05/2003)).
This plays (apparently correctly) the original repro file, with
no stack smashing.
Based on a patch and bug report by André Draszik <git@andred.net>
The hack causes deadlocks and other interesting problems and it really
can only be fixed properly inside GLib. We will include a patch for
GLib in our builds for now that handles this, and hopefully at some
point GLib will also merge a proper solution.
A proper solution would first require to refactor the polling in
GMainContext to only provide a single fd, e.g. via epoll/kqueue
or a thread like the one added by our patch. Then this single
fd could be retrieved from the GMainContext and directly integrated
into a NSRunLoop.
https://bugzilla.gnome.org/show_bug.cgi?id=741450
https://bugzilla.gnome.org/show_bug.cgi?id=704374
Soon after setting two variables to 1, the code checks if their values are
different from each other. This would never be true. Removing this.
CID 1226443
No need to use an iterator for this which creates a temporary
structure every time and also involves taking and releasing the
object lock many times in the course of iterating. Not to mention
all that GList handling in gst_aggregator_iterate_sinkpads().
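i.e. a plain walk over the element's sinkpads list (sketch; self and the
locking shown are for illustration):

  GList *l;

  GST_OBJECT_LOCK (self);
  for (l = GST_ELEMENT (self)->sinkpads; l; l = l->next) {
    GstAggregatorPad *pad = l->data;
    /* inspect pad */
  }
  GST_OBJECT_UNLOCK (self);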
The minimum latency is the latency we have to wait at least
to guarantee that all upstreams have produced data. The maximum
latency has no meaning like that and shouldn't be used for waiting.
When iterating sink pads to collect some data, we should take the stream lock so
we don't get stale data and possibly deadlock because of that. This fixes
a definitive deadlock in _wait_and_check() that manifests with high max
latencies in a live pipeline, and fixes other possible race conditions.
Segment start needs only to be updated when starting the streams
or after a seek, doing it during bitrate changes will cause the
running time to go discontinuous (jump back to a previous ts)
and QOS will drop buffers
This simplifies the code and also makes sure that we don't forget to check all
conditions for waiting.
Also fix a potential deadlock caused by not checking if we're actually still
running before starting to wait.
Actually we should always recalculate the buffer size, since our buffer size,
even when not padded, is smaller for many sub-sampled formats. This is
because we don't add padding between the planes.
https://bugzilla.gnome.org/show_bug.cgi?id=740900
The problem was that if a buffer was mapped READWRITE (the state of buffers
from libav right now), mapping it READ/GL would not upload. This is because
the flag is only set when the buffer is unmapped. We can fix this by setting
the flags in map. The result is that an already-mapped buffer that then gets
mapped for reading in GL will be uploaded. The remaining problem is that if
the write mapper makes modifications afterwards, those will never get
uploaded.
https://bugzilla.gnome.org/show_bug.cgi?id=740900
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
This removes the uses of GAsyncQueue and replaces it with explicit
GMutex, GCond and wakeup count which is used for the non-live case.
For live pipelines, the aggregator waits on the clock until either
data arrives on all sink pads or the expected output buffer time
arrives plus the timeout/latency at which time, the subclass
produces a buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=741146
To avoid race conditions between gst_task_stop(); gst_task_join() and
another thread doing gst_task_pause(): the joining thread would be
waiting for the task to stop, but that would never happen. So just
use gst_task_stop() everywhere to avoid needing more mutexes.