It's true that we shouldn't consider errors fatal immediately, but if we
always ignore them we will loop infinitely on live streams with segments
that can't be downloaded at all.
Add a vfunc that is called by glfilter before it sets
caps features and intersects with the peer caps, and
move removing the size from caps into its default
implementation. Allows sub-classes to do more
sophisticated management of the size fields in case they
don't support arbitrary resizing or have distinct
preferences.
Add preserve_update_caps_result boolean on the class to allow
sub-classes to disable videoaggregator removing sizes and framerate
from the update_caps() return result.
A return value of GST_FLOW_OK with a NULL buffer from get_output_buffer()
means the sub-class doesn't want to produce an output buffer, so
skip it.
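As a rough sketch of the handling described above (illustrative names, not
the exact videoaggregator code):

  flow_ret = vagg_klass->get_output_buffer (vagg, &outbuf);
  if (flow_ret != GST_FLOW_OK)
    goto done;
  if (outbuf == NULL) {
    /* sub-class does not want to produce an output buffer: skip it */
    GST_LOG_OBJECT (vagg, "no output buffer produced, skipping");
    goto done;
  }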
If gst_videoaggregator_do_aggregate() generates an error, make sure
to propagate it - don't just ignore and discard the error by
over-writing it with the gst_pad_push() result.
Previously when compiling GstGL with both GL and GLES2,
GL_RGBA8 was picked from GL/gl.h. But a clash may happen at
runtime when GLES2 is selected.
gst_gl_internal_format_rgba allows checking at runtime
whether it should use GL_RGBA or GL_RGBA8.
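A minimal sketch of the runtime check (the exact helper signature may differ):

  /* prefer the sized internal format only on desktop GL, plain GL_RGBA on GLES2 */
  static guint
  internal_format_rgba (GstGLContext * context)
  {
    if (gst_gl_context_get_gl_api (context) & GST_GL_API_GLES2)
      return GL_RGBA;
    return GL_RGBA8;
  }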
Simple implementation split from GstGLWindowWayland
Can now have multiple glimagesink elements all displaying output
linked via GL or otherwise (barring GL platform limitations).
The intel driver is racy and can crash setting up the two glimagesink contexts.
e.g.
videotestsrc ! tee name=t ! queue ! glupload ! glimagesinkelement
t. ! queue ! gleffects_blur ! glimagesinkelement
videotestsrc ! glupload ! glfiltercube ! tee name=t ! queue ! glimagesinkelement
t. ! queue ! gleffects_blur ! glimagesinkelement
Otherwise we could mistake a gl3 context for a gl2 context,
resulting in a failure to get the list of extensions from the
wrapped context due to the difference between glGetString and
glGetStringi for the GL_EXTENSIONS token.
https://bugzilla.gnome.org/show_bug.cgi?id=749728
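For reference, the difference boils down to something like this (a sketch,
not the actual wrapped-context code):

  /* GL3+ core contexts enumerate extensions with glGetStringi, older
   * GL/GLES2 contexts use the single glGetString call */
  if (gl_major >= 3) {
    GLint i, n_exts = 0;
    glGetIntegerv (GL_NUM_EXTENSIONS, &n_exts);
    for (i = 0; i < n_exts; i++)
      g_string_append_printf (ext_str, "%s ",
          (const gchar *) glGetStringi (GL_EXTENSIONS, i));
  } else {
    g_string_append (ext_str, (const gchar *) glGetString (GL_EXTENSIONS));
  }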
When called from gst_gl_window_win32_close(), the internal window
might not exist, and if it does it is going to be destroyed just
after that anyway. It also causes window_proc() to be called
and to crash because it gets a NULL context.
When called from gst_gl_window_win32_set_window_handle() we are
going to set another parent anyway, and it's probably better to
reparent directly instead of passing by a NULL parent which could
cause the internal window to popup briefly.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
gst_gl_context_finalize() is calling gst_gl_window_win32_quit()
which was posting a message. But then window_proc takes the window's
context and gets NULL.
Now that we've got a GMainLoop we can do like other backends and
simply call g_main_loop_quit().
This also removes duplicated code to release the parent window and
a potential crash there, because parent_proc could be NULL if we never
created the internal window. That could happen for example when setting
the state to READY, then setting a window_handle, and going back to NULL state.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
gst_gl_window_win32_send_message_async() could be called before the
internal window is created so we cannot use PostMessage there.
x11 and wayland backends both create a custom GSource for this,
so there is no reason to not do that for win32.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
Otherwise it could stay client side without being submitted to the GL
server, resulting in another context waiting on a Fence that will never
become signalled, causing a deadlock.
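Roughly, using the GstGLFuncs vtable (a sketch, details may differ):

  /* flush after creating the fence so it reaches the GL server before
   * another context tries to wait on it */
  sync = gl->FenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  gl->Flush ();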
Make the passthrough check contingent on only the fields we
can modify being unchanged, and pre-compute it when caps
change instead of checking on each buffer. Makes the passthrough
more lenient if consumers are lax about making input and output
caps complete.
The EOS and EOB nals have a size of 2, which is the size of the
nal unit header itself. gst_h265_parser_identify_nalu() does not
need to scan for the start code again in this case.
In other cases, a valid nal unit requires a minimum of 3 bytes
(2 byte header and at least 1 byte of RBSP payload).
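A simplified sketch of the check (not the exact parser code):

  /* EOS/EOB carry only the 2 byte nal unit header, any other valid nal
   * unit needs at least 3 bytes (header + 1 byte of RBSP payload) */
  if (nalu->type == GST_H265_NAL_EOS || nalu->type == GST_H265_NAL_EOB) {
    if (nalu->size < 2)
      return GST_H265_PARSER_BROKEN_DATA;
  } else if (nalu->size < 3) {
    return GST_H265_PARSER_BROKEN_DATA;
  }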
Skip the byte alignment bits as per the logic of byte_alignment()
provided in hevc specification. This will fix the calculation of
slice header size.
https://bugzilla.gnome.org/show_bug.cgi?id=747613
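The skipped bits follow byte_alignment() from the spec; roughly (a sketch,
helper names may differ):

  /* one alignment_bit_equal_to_one, then alignment_bit_equal_to_zero
   * until the next byte boundary */
  if (!nal_reader_skip (&nr, 1))
    goto error;
  while (!nal_reader_is_byte_aligned (&nr)) {
    if (!nal_reader_skip (&nr, 1))
      goto error;
  }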
Even for "live" streams we are not live in the GStreamer meaning of the word.
We don't produce buffers that are timestamped based on their "capture time"
and our clock, but just based on whatever timestamps the stream might contain.
Also even if we wanted to claim to be live, that wouldn't work well as we
would have to return GST_STATE_CHANGE_NO_PREROLL when going from READY to
PAUSED, which we can't. We first need data to know if we are "live" or not.
It would deadlock as we would then join() the update task from itself. Instead
just post an actual error message on the bus and only stop the update task.
The application is then responsible for shutting down the element, and thus
all the other tasks and everything, based on the error message it gets.
This would've also triggered if for some reason the segment was updated
in such a way that PTS went backwards, but the running time increased. Like
what happens when non-flushing seeks are done.
We're doing a proper buffer-from-the-past check a few lines below based on the
running time, which is the only time we should care about here.
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called
with a private lock, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live as
we think that 0 upstream latency is possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
It might return OK from subclasses and that could cause a bitrate
renegotiation. For DASH and MSS that is ok as they won't expose
new pads as part of this, but it can cause issues for HLS as
it will expose new pads, leading to pads that will only get EOS,
which causes decodebin to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=745905
Show the DispmanX window only if there's no shared external GL context
set up. When a window is required by the context a transparent
DispmanX element is created and later on made visible by the ::show
method.
https://bugzilla.gnome.org/show_bug.cgi?id=746632
In some upload implementations the out buffer has more than one reference,
making the buffer non-writable, so it won't be possible to modify its
meta-data.
This patch moves the meta-data copy before increasing the reference count
of the out buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=746173
Asks the subclass for a potential time offset to apply to each
separate stream; in DASH, streams can have "presentation time offsets",
which can be different for each stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Chaining a downstream pool would lead to two owners of the same
pool. In a dynamic pipeline, if one owner is removed from the pipeline
the pool will be stopped, and the rest of the pipeline will fail
since the pool will now be flushing. Also fix proposed pool caching:
filter->pool was never set and never unreffed.
https://bugzilla.gnome.org/show_bug.cgi?id=745705
In case the original caps were missing some optional fields like
interlace-mode. We assume default values for those everywhere,
but they can still cause negotiation to fail if a downstream element
expects the field to be there and at a specific value.
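For illustration, making the usual defaults explicit before negotiating could
look like this (a sketch; the exact fields depend on the element):

  /* fill in the optional fields so a downstream element that requires
   * them can still match */
  gst_caps_set_simple (caps,
      "interlace-mode", G_TYPE_STRING, "progressive",
      "pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1, NULL);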
Otherwise the pipeline stalls when running
more than one glimagesink with gst-launch.
Also only register the custom nsapp loop
when setting up the nsapp from gstgl.
We also need to recalculate the offset, since otherwise the frame
mapping will be forward two lines in the U and V planes (I420) due
to gst_video_info_align() rounding up the Y plane to an even number of
lines.
https://bugzilla.gnome.org/show_bug.cgi?id=745054
Make sure we support offset and video alignment when downloading too.
This is currently not used (plane_start is always 0), but it makes
the code correct if we want to use that later.
Provide the right size to GL when uploading. Using maxsize is wrong
since we offset the data pointer with the memory offset and video
alignment offset.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
When the memory is a partial copy, the texture size and videoinfo no
longer make sense. As we cannot guess what the application wants, we
safely copy into sysmem memory.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
This implements support for GstAllocationParams and memory alignments.
The parameters were simply ignored, which could lead to crashes on
certain platforms when used with libav, if unlucky.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
When trying to render buffers with meta:GLTextureUpload glimagesink crashes
with a segmentation fault.
This patch works around the crash by setting the method implementation
to NULL after it is freed.
https://bugzilla.gnome.org/show_bug.cgi?id=745206
When setting a new window handle, we need to ensure all implementations
will detect the change.
For that we deactivate the context before setting the window handle, then
reactivate the context
https://bugzilla.gnome.org/show_bug.cgi?id=745090
When (re)activating the context, the backing window handle might have changed.
If that happened, destroy the previous surface and create a new one
https://bugzilla.gnome.org/show_bug.cgi?id=745090
Causes the following warning on clang:
gst-dvb-section.c:567:36: error: format specifies type 'unsigned long' but the argument has type 'int' [-Werror,-Wformat]
descriptors_loop_length, end - 4 - data);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
This fixes a very slow rendering rate regression that only
happens when using gst-launch, i.e. in the case where
the main thread does not run any NSApp loop.
Git bisect reported it has been introduced by the commit
e10d2417e2:
"move to CGL and CAOpenGLLayer for rendering".
Then the commit 7d46357627:
"gstglwindow_cocoa: fix slow render rate" attempted to fix
the slow rendering rate problem when using gst-launch.
At least for me it does not work. I tried several
combinations, for example flushing CA transactions in the
custom app loop, as mentioned in the doc, but the only solution
that fixes the slow rendering is reducing the loop latency.
From what I tested, there is no need to go below 60 ms, even if the
frame interval is much lower (16.6 ms for 60 fps).
At any other time, we have no idea how to match up maps and unmaps.
We also don't know exactly how the calling code is using us.
Also fixes the case where we're trying to transfer while someone else
is accessing our data pointer or texture resulting in mismatched video
frames.
https://bugzilla.gnome.org/show_bug.cgi?id=744839
One has to use the src_lock anyway to protect the min/max/live so they
can be notified atomically to the src thread to wake it up on changes,
such as property changes. So no point in having a second lock.
Also, the object lock was being held across a call to
GST_ELEMENT_WARNING, guaranteeing a deadlock.
While gst_aggregator_iterate_sinkpads() makes sure that every pad is only
visited once, even when the iterator has to resync, this is not all we have
to do for querying the latency. When the iterator resyncs we actually have
to query all pads for the latency again and forget our previous results. It
might have happened that a pad was removed, which influenced the result of
the latency query.
It was between another function and its helper function before, which was
confusing when reading the code as it had nothing to do with the other
functions.
This lock is not what is commonly known as a "stream lock" in GStreamer:
it's not recursive and it's taken from the non-serialized FLUSH_START event.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Move the property from subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed
from mssdemux and hlsdemux as it is now in the base class.
Don't use private GMutex implementation details to check
whether it has been freed already or not. Just clear mutex
and GCond unconditionally in free function, they are always
inited anyway, and the free function can't be called multiple
times either.
steal_buffer() + unref seems to be a wide-spread idiom
(which perhaps indicates that something is not quite
right with the way aggregator pad works currently).
Where possible, use the _OBJECT variants in order to better track
which object the debug statement is coming from
Define (and use) GST_CAT_DEFAULT where applicable
Use GST_PTR_FORMAT where applicable
And use the average to go up in resolution, and the last fragment
bitrate to go down.
This allows the demuxer to react rapidly to bitrate loss, and
be conservative for bitrate improvements.
+ Add a construct only property to define the number of fragments
to consider when calculating the average moving bitrate.
https://bugzilla.gnome.org/show_bug.cgi?id=742979
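Schematically, the selection rule described above is (names illustrative):

  /* switch down based on the last fragment's bitrate, switch up only
   * based on the moving average */
  if (last_fragment_bitrate < current_bitrate)
    new_bitrate = last_fragment_bitrate;   /* react quickly to loss */
  else
    new_bitrate = moving_average_bitrate;  /* be conservative going up */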
If the src framerate and videoaggregator's output framerate were
different, then we were taking every single buffer that had duration=-1
as it came in, regardless of the buffer's start time. This caused the src
to possibly run at a different speed to the output frames.
https://bugzilla.gnome.org/show_bug.cgi?id=744096
In gst_gl_filter_fixate_caps () it can goto done without freeing the memory of
the tmp GstStructure. This makes it go out of scope and leak.
CID #1265765
Allows finer grain decisions about formats and features at each
stage of the pipeline.
Also provide propose_allocation for glupload based on the supported
methods.
In gst_gl_window_cocoa_draw we used to just call setNeedsDisplay:YES. That was
creating an implicit CA transaction which was getting committed at the next
runloop iteration. Since we don't know how often the main runloop is running,
and when we run it implicitly (from gst_gl_window_cocoa_nsapp_iteration) we only
do so every 200ms, use an explicit CA transaction instead and commit it
immediately. CA transactions nest and debounce automatically so this will never
result in extra work.
Make GstGLMemory hold the texture target (tex_target) the texture it represents
(tex_id) is bound to. Modify gst_gl_memory_wrapped_texture and
gst_gl_download_perform_with_data to take the texture target as an argument.
This change is needed to support wrapping textures created outside libgstgl,
which might be bound to a target other than GL_TEXTURE_2D. For example on OSX
textures coming from VideoToolbox have target GL_TEXTURE_RECTANGLE.
With this change we still keep (and sometimes imply) GL_TEXTURE_2D as the
target of textures created with libgstgl.
API: modify GstGLMemory
API: modify gst_gl_memory_wrapped_texture
API: gst_gl_download_perform_with_data
Don't call glClear && glClearColor at each draw since we're going to draw the
whole viewport anyway. Gets rid of a glFlush triggered by glClear on OSX.
Instead of using the GST_OBJECT_LOCK we should have
a dedicated mutex for the pad, as it is also associated
with the EVENT_MUTEX on which we wait
in the _chain function of the pad.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Some members sometimes used atomic access and sometimes were not locked at
all. Instead consistently use a mutex to protect them, and document
that.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Reduce the number of locks to simplify the code; what the lock protects
is exposed, but the lock itself was not.
This also means adding an _unlocked version of gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Whenever a GCond is used, the safest paradigm is to protect
the variable whose change is signalled by the GCond with the same
mutex that the GCond depends on.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
This can happen if this is a live pipeline and no source produced any buffer
and sent no caps until an output buffer should've been produced according to the
latency.
This fix is similar in spirit to commit be7034d1 by Sebastian for audiomixer.
In order to use PBOs efficiently, the transfer operation has to
be separated from the use of the downloaded data, which requires some
rearchitecting around glcolorconvert/gldownload and related elements.
Unset out buffer in clip function when we unref the buffer to be
clipped, otherwise aggregator will continue to use the already-
freed buffer. Fixes crash when buffers without timestamps are
being fed to aggregator. Partly because aggregator ignores the
error flow return.
https://bugzilla.gnome.org/show_bug.cgi?id=743334
READ_UE_ALLOWED was almost exclusively used with min == 0, which doesn't
make much sense for unsigned integers.
Add a READ_UE_MAX variant and use that instead. Also replace two usages
of CHECK_ALLOWED (a, 0, something) with CHECK_ALLOWED_MAX (a, something).
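A sketch of what the max-only variant amounts to (the real macro differs in
details):

  /* unsigned Exp-Golomb read with only an upper bound check */
  #define READ_UE_MAX(nr, val, max) G_STMT_START { \
    guint32 tmp; \
    if (!nal_reader_get_ue (nr, &tmp) || tmp > (max)) \
      goto error; \
    (val) = tmp; \
  } G_STMT_END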
Depending on the platform, it was only ever implemented to 1) set a
default surface size, 2) resize based on the video frame or 3) nothing.
Instead, provide a set_preferred_size () that elements/applications
can use to request a certain size which may be ignored for
videooverlay/other cases.
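Typical usage from an element then looks like this (a sketch):

  /* request the negotiated video size; the window/backend is free to
   * ignore it, e.g. when embedded via the video overlay */
  gst_gl_window_set_preferred_size (window,
      GST_VIDEO_INFO_WIDTH (&vinfo), GST_VIDEO_INFO_HEIGHT (&vinfo));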
Removes the use of NSOpenGL* variety and functions. Any Cocoa
specific functions that took/returned a NSOpenGL* object now
take/return the CGL equivalents.
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request
If we say it is the first segment after a new period it will resync
the segment.start value and all buffers will be late for the new period
we are trying to play. Otherwise we want to keep segment.start at
the previous value to allow the running time to smoothly increase.
Check if there is a next fragment before advancing to avoid causing
a bitrate switch (and maybe exposing new pads) only to push EOS.
That would cause playback to stop with an error instead of properly
finishing with an EOS message.
The subsegment boundary return tells the adaptivedemux that it can
try to switch to another representation as the stream is at a suitable
position for starting from another bitrate.
In order to get some subsegment information, subclasses might want
to download only the headers to have enough data (the index)
to decide where to start downloading from the subsegment.
This allows the subclasses to know if the chunks that are downloaded are
part of the header or of the index, so they can parse the parts that
are of interest to them.
Ensure that we do not trust the bitstream when filling a table
with a fixed max size.
Additionally, the code was not quite matching what the spec says:
- a value of 3 broke from the loop before adding an entry
- an unhandled value did not add an entry
The reference algorithm does these things differently (7.3.3.1
in ITU-T Rec. H.264 (05/2003)).
This plays (apparently correctly) the original repro file, with
no stack smashing.
Based on a patch and bug report by André Draszik <git@andred.net>
The hack causes deadlocks and other interesting problems and it really
can only be fixed properly inside GLib. We will include a patch for
GLib in our builds for now that handles this, and hopefully at some
point GLib will also merge a proper solution.
A proper solution would first require to refactor the polling in
GMainContext to only provide a single fd, e.g. via epoll/kqueue
or a thread like the one added by our patch. Then this single
fd could be retrieved from the GMainContext and directly integrated
into a NSRunLoop.
https://bugzilla.gnome.org/show_bug.cgi?id=741450
https://bugzilla.gnome.org/show_bug.cgi?id=704374
Soon after setting two variables to 1, the code checks if their values are
different from each other. This would never be true. Removing this.
CID 1226443
No need to use an iterator for this which creates a temporary
structure every time and also involves taking and releasing the
object lock many times in the course of iterating. Not to mention
all that GList handling in gst_aggregator_iterate_sinkpads().
The minimum latency is the latency we have to wait at least
to guarantee that all upstreams have produced data. The maximum
latency has no meaning like that and shouldn't be used for waiting.
When iterating sink pads to collect some data, we should take the stream lock so
we don't get stale data and possibly deadlock because of that. This fixes
a definitive deadlock in _wait_and_check() that manifests with high max
latencies in a live pipeline, and fixes other possible race conditions.
Segment start needs only to be updated when starting the streams
or after a seek, doing it during bitrate changes will cause the
running time to go discontinuous (jump back to a previous ts)
and QOS will drop buffers
This simplifies the code and also makes sure that we don't forget to check all
conditions for waiting.
Also fix a potential deadlock caused by not checking if we're actually still
running before starting to wait.
Actually we should always recalculate the buffer size, since our buffer size,
even when not padded, is smaller for many sub-sampled formats. This is
because we don't add padding between the planes.
https://bugzilla.gnome.org/show_bug.cgi?id=740900
The problem was that if a buffer was mapped READWRITE (the state of buffers
from libav right now), mapping it READ/GL would not upload. This is because
the flag is only set when the buffer is unmapped. We can fix this by setting
the flags in map. As a result, an already mapped buffer that then gets mapped
to be read in GL will be uploaded. The remaining problem is that if the write
mapper makes modifications afterwards, those modifications will never get
uploaded.
https://bugzilla.gnome.org/show_bug.cgi?id=740900
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
This removes the uses of GAsyncQueue and replaces it with explicit
GMutex, GCond and wakeup count which is used for the non-live case.
For live pipelines, the aggregator waits on the clock until either
data arrives on all sink pads or the expected output buffer time
arrives plus the timeout/latency at which time, the subclass
produces a buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=741146
To avoid race conditions between gst_task_stop(); gst_task_join() and
another thread doing gst_task_pause(), where the joining thread would be
waiting for the task to stop but that would never happen, just
use gst_task_stop() everywhere to avoid needing more mutexes.
Check if the stream is live before checking if it is EOS as a live
stream might be considered EOS when it just needs to wait for a manifest
update to proceed with the next fragments
During flushing seeks the flushing flow return will propagate up to the
source element and all pads are going to have the flushing flag set.
So before restarting also remove that flag together with the EOS one.
We don't do that when pushing the flush stop event because our event
handler for the proxypad will drop all events.
A context can create a GLsync object that can be waited on in order
to ensure that GL resources created in one context are able to be
used in another shared context without any chance of reading invalid
data.
This meta would be placed on buffers that are known to cross from
one context to another. The receiving element would then wait
on the sync object to ensure that the data to be used is complete.
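Roughly, a receiving element would do something like this (a sketch):

  /* wait on the sync meta before using the texture in this context */
  GstGLSyncMeta *sync_meta = gst_buffer_get_gl_sync_meta (inbuf);
  if (sync_meta)
    gst_gl_sync_meta_wait (sync_meta, context);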
gst_video_info_set_format() will reset the complete video-info, but
we want to keep values like the PAR, colorimetry and chroma site.
Otherwise we risk setting different values on the srcpad caps than
what is actually inside the buffers.
Otherwise we might negotiate from the sinkpad streaming threads at
the same time as on the srcpad streaming thread, and then all kinds
of crazy bugs happen that don't make any sense at all.
This gives more flexibility to the subclasses and permits removing the
ugly GstVideoAggregatorClass->disable_frame_conversion API.
WARNING: This breaks the API as it removes the disable_frame_conversion
field
API:
+ GstVideoAggregatorClass->find_best_format
+ GstVideoAggregatorPadClass->set_format
+ GstVideoAggregatorPadClass->prepare_frame
+ GstVideoAggregatorPadClass->clean_frame
- GstVideoAggregatorClass->disable_frame_conversion
https://bugzilla.gnome.org/show_bug.cgi?id=740768
With the current code, we will end up setting the preferred downstream
format as the srcpad format, and it might not be accepted by the next
sinkpad to be added. We should instead let the next sinkpad reconfigure
everything.
Floating point numbers are written differently in different
locales, e.g. in many countries 1/2 = 0,5 instead of 0.5, and
strtod will not be able to parse "0.5" correctly in such a
locale.
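The usual fix is GLib's locale-independent parser, for example:

  /* parses "0.5" the same way in every locale */
  gdouble value = g_ascii_strtod (str, NULL);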
Otherwise the caps of the pad might change while the subclass still works with
a buffer of the old caps, assuming that the current pad caps apply to that
buffer. Which then leads to crashes and other nice effects.
https://bugzilla.gnome.org/show_bug.cgi?id=740376
Otherwise interesting things will happen in Cocoa applications, like
infinite event loops that block the NSApplication loop forever.
This was only needed for GNUStep and thus can safely be removed now.
Until gcc and GNUStep properly support Objective-C blocks and other
"new" features of Objective-C we can't properly support them without
making the code much more ugly.
https://bugzilla.gnome.org/show_bug.cgi?id=739152
It's architecture dependent and should not be placed into the include
directory as the assumption is that all those headers are architecture
independent.
https://bugzilla.gnome.org/show_bug.cgi?id=739767
Otherwise when resizing the window you will also get messages like:
class NSConcreteMapTable autoreleased with no pool in place - just leaking
class NSConcreteValue autoreleased with no pool in place - just leaking
class NSConcreteValue autoreleased with no pool in place - just leaking
class __NSCFDictionary autoreleased with no pool in place - just leaking
Need to set the ':' as the reshape method now takes one parameter.
For context, GstGLNSView previously inherited from
NSOpenGLView, which has a reshape function without any parameter.
Now GstGLNSView inherits from NSView and we re-use the reshape
function manually.
Use the reshape function after being defined. The other way
would have been to declare the reshape function in the header.
gstglwindow_cocoa.m: In function '-[GstGLNSView drawRect:]':
gstglwindow_cocoa.m:555: warning: 'GstGLNSView' may not respond to '-reshape'
gstglwindow_cocoa.m:555: warning: (Messages without a matching method signature
gstglwindow_cocoa.m:555: warning: will be assumed to return 'id' and accept
gstglwindow_cocoa.m:555: warning: '...' as arguments.)
GTK-Doc uses a special syntax for code documentation. A multiline comment that
starts with an additional '*' marks a documentation block that will be processed
by the GTK-Doc tools. So GTK-Doc is confused if a comment block starts with that
additional '*' but isn't meant to be processed. Removing this additional '*'.
https://bugzilla.gnome.org/show_bug.cgi?id=739444
Both _parse_atsc_mgt() and _parse_atsc_vct () change the value of the variable
data just before returning. The new value is never used since data is a pointer
declared at the beginning of the function that goes out of scope just after the
new value is stored.
https://bugzilla.gnome.org/show_bug.cgi?id=739404
Identify SVC NAL units and tag them as such. This is necessary for
gst_h264_parser_parse_slice_hdr() to fail gracefully, if the user
did not perform the check himself.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix copy-paste error in gst_h264_sps_mvc_copy() where num_anchor_refs_l0
and num_non_anchor_refs_l0 were incorrectly initialized from list1.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Always set a default NALU extension type, and override it
when we find a supported extension, to avoid having it unset/random
for unsupported NALU extensions
This parses the frame_packing_arrangement() payload in the SEI message.
This information can be used by decoders to appropriately rearrange the
samples which belong to Stereoscopic and Multiview High profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=685215
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The audiomixer blocksize can't be 0, hence adjust the minimum value to 1
The timeout value of aggregator was defined with a maximum of G_MAXINT64,
but it cannot exceed G_MAXLONG * GST_SECOND - 1.
Hence change the maximum value accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=738845
In simple profile, level set to 0 or 2 indicate low and medium level
respectively. In main profile, level set to 0, 2 or 4 indicate low,
medium and high level respectively.
Level values are defined in Annex J.1.2 of the SMPTE 421M.
https://bugzilla.gnome.org/show_bug.cgi?id=738230
Using NSApp directly seems to confuse something, as the compiler
was expecting an id<NSFileManagerDelegate>. Switched to using
[NSApplication sharedApplication], and specified the delegate
protocol on the window class as well.
https://bugzilla.gnome.org/show_bug.cgi?id=738740
Determines the amount of time that a pad will wait for a buffer before
being marked unresponsive.
Network sources may fail to produce buffers for an extended period of time,
currently causing the pipeline to stall possibly indefinitely, waiting for
these buffers to appear.
Subclasses should render unresponsive pads with either silence (audio), the
last (video) frame or what makes the most sense in the given context.
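For example, an application could bound the wait like this (a sketch; the
property name is taken from the description above and the units are assumed
to be nanoseconds):

  /* wait at most 2 seconds for data on a pad before it is considered
   * unresponsive (units assumed to be nanoseconds) */
  g_object_set (aggregator, "timeout", (gint64) (2 * GST_SECOND), NULL);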
- The shader was outputting the wrong values compared with raw
videotestsrc.
- deal with the texture edge properly.
- properly sample the 2x1 rectangle for the u and v values
- don't double sample the y value
The previous implementation kept accumulating GSources,
slowing down the iteration and leaking memory.
Instead of trying to fix the main context flushing, replace
it with a GAsyncQueue which is simple to flush and has
less overhead.
https://bugzilla.gnome.org/show_bug.cgi?id=736782
The aggregator.segment is not to be initialized by the subclasses but
by the aggregator itself. Moreover, initializing it on start would make
us lose the information coming from the initial seek.
Without a lock that is taken in FLUSH_START we had a rare race where we
end up aggregating a buffer that was before the whole FLUSH_START/STOP
dance. That could lead to very wrong behaviour in subclasses.
We should be able to always keep the VIDEO_AGGREGATOR_LOCK while
negotiating caps; this patch introduces that change.
It also implies that we do not need the SETCAPS_LOCK anymore because
now the VIDEO_AGGREGATOR_LOCK guarantees that setcaps is not called from
several threads, and the gst_aggregator_set_caps method is now
protected.
https://bugzilla.gnome.org/show_bug.cgi?id=735042
This avoids ending up in an inconsistent state where we do not have the
actually negotiated caps set as srccaps, which could lead to trying
to unref ->srccaps when they have already been set to NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=735042
This is about converting the format, not about converting any widths and
heights. Subclasses are expected to handle different resolutions themselves,
like the videomixers already do properly.
The visible rect and bounds might be the same as before, but Cocoa
might've changed our viewport without us noticing. This happens if
you hide the view and show it again.
This is only needed for non-Cocoa apps but previously caused a 2 second
wait during startup for Cocoa apps. This is unacceptable.
Instead we now check a bit more extensive if something actually
runs on the GLib default main context, and if not don't even
bother waiting for something to happen from there.
Otherwise we could pass on a RGBA formatted buffer and downstream would
misinterpret that as some other video format.
Fixes pipelines of the form
gleffects ! tee ! xvimagesink
Allows callers to properly reference count the buffers used for
rendering.
Fixes a redraw race in glimagesink where the previous buffer
(the one used for redraw operations) is freed as soon as the next
buffer is uploaded.
1. glimagesink uploads in _prepare() to texture n
1.1 glupload holds buffer n
2. glimagesink _render()s texture n
3. glimagesink uploads texture n+1
3.1 glupload frees the previous buffer which deletes texture n
3.2 glupload holds buffer n+1
4. glwindow resize/expose
5. glimagesink redraws with texture n
The race is that the buffer n (the one used for redrawing) is freed as soon as
the buffer n+1 arrives. There could be any amount of time and number of
redraws between this event and when buffer n+1 is actually rendered and thus
replaces buffer n as the redraw source.
https://bugzilla.gnome.org/show_bug.cgi?id=736740
sequence-layer is serialized in little-endian byte order except for
STRUCT_C which is serialized in big-endian byte order.
But since STRUCT_A and STRUCT_B fields are defined as unsigned int msb
first, we have to pass them as big-endian to their parsing function. So
we basically use temporary buffers to convert them to big-endian.
See SMPTE 421M Annex J and L.
https://bugzilla.gnome.org/show_bug.cgi?id=736871
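Conceptually the repacking looks like this (a sketch):

  /* STRUCT_A/STRUCT_B are little-endian in sequence-layer but the
   * parsing helpers expect big-endian, so repack via a temporary */
  guint8 tmp[4];
  GST_WRITE_UINT32_BE (tmp, GST_READ_UINT32_LE (data));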
This thread dispatches navigation events. It is needed to avoid deadlocks
between window backend threads that emit navigation events (e.g. X11/GMainLoop
thread) and consumers of navigation events such as glimagesink, see
https://bugzilla.gnome.org/show_bug.cgi?id=733661
GstGlWindow_x11 thread is changed to invoke the navigation thread for navigation
dispatching, instead of emitting the event itself. Other backends besides X11 do
not dispatch navigation events yet, but should use this thread when dispatching
these events in the future.
The navigation thread is currently part of GstGLWindow and not implemented in
separate subclasses / backends. This will be needed in the future.
gst_gl_window_x11_get_surface_dimensions is also changed to use a cached value
of the window's width, height. These values are now retrieved in the X11
thread, in gst_gl_window_x11_handle_event. This change is needed because
otherwise XGetWindowAttributes gets called from the navigation thread,
leading to xlib aborting due to multithreaded access (if XInitThreads is not
called before, as is the case for gst-launch)
EGL_CONTEXT_FLAGS_KHR and EGL_CONTEXT_OPENGL_DEBUG_BIT_KHR
don't exist in the Android NDK. Wrap their usage in an #ifdef
EGL_KHR_create_context to fix the build there.
The text for EGL_KHR_create_context added the possibility for ES
contexts to ask for a debug context, however that has not been
fully realized by all implementations. Fall back to a non-debug
context when the implementation errors out.
Along with the mandatory dependent events.
Some elements need to perform an allocation query inside
::negotiated_caps(). Without the caps event being sent prior,
downstream elements will be unable to answer and will return
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=732662
If the window is resized, the pointer coordinates in the navigation event's
GstStructure have to be rescaled to the original geometry. A
get_surface_dimensions GLWindow class method is added for
this purpose and used in the navigation send_event function.
https://bugzilla.gnome.org/show_bug.cgi?id=703486
Otherwise pic timing structure can have invalid cpb_removal_delay,
dpb_output_delay or pic_struct_present_flag which are blindly retrieved
in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=734124
Certain elements expect that there be a certain number of lines
that they can write into. e.g. for odd heights, I420, YV12, NV12,
NV21 (and others) Y lines are expected to have exactly twice the
number of U/UV lines.
https://bugzilla.gnome.org/show_bug.cgi?id=733717
The vlc table members cbits, cword and values were assigned in the wrong
order, causing the mpeg4 parser to fail when handling sprite
trajectories.
https://bugzilla.gnome.org/show_bug.cgi?id=733322
Previously selector_bytes and private_data_bytes were sometimes allocated and
freed using the normal allocator and sometimes using the slice allocator.
Additionally, prefer g_strdup() over g_memdup() for strings.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=732789
Allows us to add API in the future without breaking ABI. We broke the API/ABI
once between 1.2 and 1.4; let's try to avoid this in the future even if this
is an unstable library.
https://bugzilla.gnome.org/show_bug.cgi?id=730914
Fix documentation for GstH264NalUnit. The @ref_idc part was totally
unbalanced. Also add a note about @offset and @size fields to remind
that this is relative to the start of the NAL unit, thus including
the header bytes.
An end_of_seq() [EOSEQ] or end_of_stream() [EOS] NAL unit is really
one byte long because this shall include the NalHeaderBytes (1) too.
The NALU.offset starts from the first byte of the header.
This is the proper fix to commit d37f842. In practice, this fixes
parsing of FRExt1_Panasonic_D and FRExt2_Panasonic_C, that include
additional frames after an EOSEQ.
https://bugzilla.gnome.org/show_bug.cgi?id=732553
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The gst_h264_parse_pps() function dynamically allocates the slice
group ids map array, so that needs to be cleared before parsing a
new PPS NAL unit again, or when it is no longer needed.
Likewise, a clean copy to the internal NAL parser state needs to be
performed so as to avoid a double-free corruption.
https://bugzilla.gnome.org/show_bug.cgi?id=707282
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The recovery point SEI message helps a decoder in determining if the
decoding process would produce acceptable pictures for display after
the decoder initiates random access or after the encoder indicates
a broken link in the coded video sequence.
This is not used in the h264parse element, but it could help debugging.
https://bugzilla.gnome.org/show_bug.cgi?id=723380
Add nal_reader_skip_long() helper function to allow an arbitrary number
of bits to be skipped. The former nal_reader_skip() function is too
limited to the actual cache size.
Use this new function to simplify gst_h264_parser_parse_sei_message()
default case, that skips unsupported payloads.
v2: made args consistent from header to source file.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
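A sketch of the helper's shape (the real implementation may differ):

  /* skip nbits in chunks no larger than the reader cache can handle */
  static gboolean
  nal_reader_skip_long (NalReader * nr, guint nbits)
  {
    while (nbits > 32) {
      if (!nal_reader_skip (nr, 32))
        return FALSE;
      nbits -= 32;
    }
    return nal_reader_skip (nr, nbits);
  }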
Use the first _gst_reserved[] slot to hold the built-in range decoder
private data. The first slot was formerly the buffer size, which was
then promoted to semi-public namespace when it got integrated as git
commit 2940ac6.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
It was previously a mix and match of both variants, introducing just too much
confusion.
The prefix are from now on:
* GstMpegts for structures and type names (and not GstMpegTs)
* gst_mpegts_ for functions (and not gst_mpeg_ts_)
* GST_MPEGTS_ for enums/flags (and not GST_MPEG_TS_)
* GST_TYPE_MPEGTS_ for types (and not GST_TYPE_MPEG_TS_)
The rationale for choosing that is:
* the namespace is shorter/direct (it's mpegts, not mpeg_ts nor mpeg-ts)
* the namespace is one word under Gst
* it's shorter (yah)
We cannot do it as the winsys may crash if we initialize too late.
Example, GLX contexts with Intel drivers:
Intel requires the X Display to be the same in order to share GL
resources across GL contexts. These GL contexts are generally
accessed from different threads. Without winsys support it is
nearly impossible to guarantee that concurrent access will not
occur. This concurrent access could result in crashes or aborts
by the winsys (xcb).
https://bugzilla.gnome.org/show_bug.cgi?id=731525
This drops the ugly GstWaylandWindowHandle structure and is much
more elegant because we can now request the display separately
from the window handle. Therefore the window handle can be requested
in render(), i.e. when it is really needed and we can still open
the correct display for getting caps and creating the pool earlier.
This change also separates setting the wl_surface from setting its size.
Applications should do that by calling two functions in sequence:
gst_video_overlay_set_window_handle (overlay, surface);
gst_wayland_video_set_surface_size (overlay, w, h);
This is the initial implementation, without the GstVideoOverlay.expose()
method. It only implements using an external (sub)surface and resizing
it with GstWaylandVideo.
This interface is needed to be able to embed waylandsink into
other wayland surfaces. Due to the special nature of wayland,
GstVideoOverlay is not enough for this job.
Fix routine names for zigzag/raster scan order conversion routines for
quantization matrices. This ought to use the gst_h264_quant_matrix_*()
naming convention instead of gst_h264_video_quant_matrix_*(), which
derived from the MPEG-2 function names.
https://bugzilla.gnome.org/show_bug.cgi?id=731524
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix MPEG-4 and VP8 APIs to export their external symbols as pure C
symbols, i.e. un-mangled for C++.
https://bugzilla.gnome.org/show_bug.cgi?id=731522
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The reshape property was never used.
Replace the draw property with a signal.
Based on patch by Mathieu Duponchelle <mathieu.duponchelle@epitech.eu>
https://bugzilla.gnome.org/show_bug.cgi?id=704507
Fixes a segfault with decodebin ! glmixer where the request pads on
both sides were being requested after the state change to PAUSED.
Also fixes dynamically adding and removing pads while glmixer is
in a state >= PAUSED.
Currently, GstGLWindowWaylandEGL holds the wayland display connection.
If we create the EGLDisplay at the GstDisplay creation time, then
libEGL will internally open another connection to the wayland server.
These two display connections are unable to communicate resulting in
no window output/display and hangs inside libEGL.
Eventually we will move the wl_display from GstGLWindow to GstGLDisplay.
Fixes the case where _perform_with_buffer() is called without
intervening calls to _release_buffer() as is the case on start up
with glimagesink.
Also release the buffer when resetting the upload.
https://bugzilla.gnome.org/show_bug.cgi?id=731107
Fixes issues with .so (without numbering) being installed for development
(such as from mesa-dev) but actual driver (with numbering) coming from
some other place (like nvidia drivers)
ATSC has its own version of the EIT table (DVB also has one).
This patch adds parsing for the ATSC EIT table and also fixes
the section identification to mark it as the ATSC one.
The implementation was refactored to reuse some common internal
structures from ETT.
Also adds its dumping function to the ts-parser example
https://bugzilla.gnome.org/show_bug.cgi?id=730435
Adds the system time table structure and functions for convenient parsing of
it and for getting the UTC datetime that it represents. Also adds its
information dumping to the ts-parser example
https://bugzilla.gnome.org/show_bug.cgi?id=730435
ETT (extended text table) contains ATSC text information with descriptions
of virtual channels and events. The text can be internationalized and also
compressed.
https://bugzilla.gnome.org/show_bug.cgi?id=730435
Add a parsing function for MGT and also detect the EIT tables
for ATSC. The EIT pids are reported inside the MGT and we are still
relying only on the table id for detecting it. In the future we
would want to also check the pid and compare with whatever the MGT
previously reported to confirm that it is indeed the EIT.
https://bugzilla.gnome.org/show_bug.cgi?id=730435
Make the ATSC section parser handle both TVCT and CVCT as they are
nearly the same struct (CVCT uses 2 reserved bits that are ignored
in TVCT).
This is changing the glib type and the struct name but TVCT wasn't
released in a stable package yet so there should be no problem.
Also includes some parsing fixes and changes short_name to be
directly stored as utf8 rather than utf16
https://bugzilla.gnome.org/show_bug.cgi?id=730642
Before:
GST_GL_PLATFORM=cocoa GST_GL_WINDOW=cocoa
gst-launch-1.0 videotestsrc ! glimagesink
After:
GST_GL_PLATFORM=cgl GST_GL_WINDOW=cocoa
gst-launch-1.0 videotestsrc ! glimagesink
but still pass --enable-cocoa to configure script
because currently it can only be used with cocoa API.
We could later have cgl/gstglcontext_cgl.h that manages
a CGLContextObj directly and cocoa/gstglcontext_cocoa.h
would just wrap it.
So that it could be used with other Apple's window APIs.
https://bugzilla.gnome.org/show_bug.cgi?id=729245
Add a new function to calculate the video stream framerate, which relies on
the SPS, slice header and pic timing, using the formula:
fps = (time_scale / num_units_in_tick) x (1 / DeltaTfiDivisor) x (1 / (field_pic_flag ? 2 : 1))
See section E2.1 of the H264 specification for the definition of the variables.
https://bugzilla.gnome.org/show_bug.cgi?id=723352
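Usage then looks roughly like this (a sketch; the exact prototype may differ):

  /* derive fps_n/fps_d from the parsed SPS, the slice header's
   * field_pic_flag and the pic timing SEI's pic_struct */
  gint fps_n, fps_d;
  gst_h264_video_calculate_framerate (sps, slice.field_pic_flag,
      sei.payload.pic_timing.pic_struct, &fps_n, &fps_d);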
In a pipeline like so:
videotestsrc ! gleffects ! videoconvert ! sink
gleffects was simply passing the videoconvert bufferpool to videotestsrc
and not creating a glbufferpool. videobufferpool would then fail
to allocate from the glallocator.
From d4bcef3204 on, using a RGBA
texture to hold the data causes the glmemory to have half width
and a scaling of [2, 1]. Using a LA texture solves this problem,
however it cannot be attached to the framebuffer for copying into
a RGBA texture. That will be solved by moving to EXT_texture_rg.
https://bugzilla.gnome.org/show_bug.cgi?id=728890
Previously it would only work if the alpha value was in the last
component (RGBx, BGRx). Now it works wherever the alpha value may
be (xRGB, xBGR, etc).
This fixes the OSX build and any builds with --disable-egl. That issue
was introduced in "glfilter: rewrite transform_caps to preserve caps fields".
https://bugzilla.gnome.org/show_bug.cgi?id=729861
Setting a scale factor for the X coordinate is not enough, as the indexer
will still think the stride is shorter and will not fully skip it. Instead,
update the width so the lines are as expected. Combined with the scale, this
will hide the cropped portion.
https://bugzilla.gnome.org/show_bug.cgi?id=729896
Fix a regression introduced recently with the lazy init.
It was happening when calling gst_video_gl_texture_upload_meta_upload
from an application, i.e. not using gst_gl_upload_perform_with_buffer.
gst_video_frame_map() will store an updated video info based
on the video meta. In order to have the right stride and offset
we should update that video info accordingly.