In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer from the pad. If the
previous element is not fast enough, it may not get the next buffer in
time even though it is queued just upstream. To prevent that race, the
easiest solution is to move the queue inside the GstAggregatorPad itself.
It also means that there is no need for strange code caused by increasing
the min latency without increasing the max latency proportionally.
This also means queuing the synchronized events and possibly acting
on them on the src task.
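A conceptual sketch of what moving the queue into the pad implies (field
names are illustrative, not the actual GstAggregatorPad private struct):

  /* Illustrative only; the real private structure differs. */
  struct _GstAggregatorPadPrivate
  {
    GQueue   buffers;      /* queued buffers and serialized events */
    GMutex   lock;         /* protects the queue */
    GCond    event_cond;   /* signals the src task that data arrived */
    gboolean eos;
  };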
https://bugzilla.gnome.org/show_bug.cgi?id=745768
We can't know if the GstGLUpload type is initialized at this point already,
and thus our debug category might not be initialized yet, which would
cause an assertion here.
As we don't print debug output for any of the other transform functions, let's
defer this problem for now.
Before, aggregator-based elements always started at running time 0; now
it's possible to select the first input buffer's running time or to
explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
Adding a pad will add a new upstream that might have a bigger minimum latency,
so we might have to wait longer. Or it might be the first live upstream, in
which case we will have to start deadline based aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might be the last
live upstream, in which case we can stop deadline based aggregation.
The coordinates are now relative to the texture dimensions rather than
the window dimensions. There is no need to pass the window dimensions
or to update the overlay when the dimensions change.
https://bugzilla.gnome.org/show_bug.cgi?id=745107
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queued events are sent the
next time a buffer is pushed for that stream.
https://bugzilla.gnome.org/show_bug.cgi?id=705991
They require looking up some functions through the platform-specific
{glX,egl}GetProcAddress rather than the default GL library symbol
lookup.
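A minimal sketch of the resulting lookup order for a GLX platform (the
helper name is hypothetical; GstGL's real loader lives in the platform
backends):

  /* Hypothetical helper, shown for illustration only. */
  #define _GNU_SOURCE
  #include <GL/glx.h>
  #include <dlfcn.h>

  typedef void (*GLFunc) (void);

  static GLFunc
  lookup_gl_function (const char *name)
  {
    /* Platform loader first: it also resolves entry points the GL
     * library does not export as plain symbols. */
    GLFunc f = glXGetProcAddressARB ((const GLubyte *) name);

    if (f == NULL)      /* fall back to the default symbol lookup */
      f = (GLFunc) dlsym (RTLD_DEFAULT, name);
    return f;
  }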
nvidia drivers return in glGetString (GL_VERSION) the exact version we
requested on creation, so start with the highest known version and work
our way down.
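A sketch of that strategy (try_create_context is a hypothetical
stand-in for the platform's actual context creation call):

  #include <glib.h>

  /* Hypothetical; stands in for the platform context creation call. */
  extern gpointer try_create_context (gint major, gint minor);

  static gpointer
  create_best_context (void)
  {
    static const struct { gint major, minor; } versions[] = {
      {4, 5}, {4, 4}, {4, 3}, {4, 2}, {4, 1}, {4, 0}, {3, 3}, {3, 2},
    };
    guint i;

    for (i = 0; i < G_N_ELEMENTS (versions); i++) {
      gpointer ctx = try_create_context (versions[i].major, versions[i].minor);

      /* nvidia reports exactly the requested version in
       * glGetString (GL_VERSION), so the first success wins */
      if (ctx != NULL)
        return ctx;
    }
    return NULL;
  }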
The previous approach of traversing the other_context weak ref tree was
1. Less performant
2. Incorrect for context destruction removing a link in the tree
Example of 2:
c1 = context_create (NULL)
c2 = context_create (c1)
c3 = context_create (c2)
context_can_share (c1, c3) == TRUE
context_destroy (c2)
unref (c2)
context_can_share (c1, c3) returns FALSE when it should be TRUE!
This does not remove the restriction that context sharedness can only
be tracked between GstGLContexts.
This will deadlock if the main thread is the one who creates the GstGLContext.
All things we call from the main thread should be possible from any thread.
https://bugzilla.gnome.org/show_bug.cgi?id=751101
Sometimes the last fragment does not exist because of rounding errors in
the durations. Just finish the stream gracefully instead of erroring out.
Segment start/time/position/base should only be modified if this is the first
time we send a segment, otherwise we will override values from the seek
segment if new streams have to be exposed as part of the seek.
Segment base should be calculated from the segment start based on the stream's
own segment, not the demuxer's segment. Both might differ slightly because of
the presentationTimeOffset.
Always add the presentationTimeOffset (relative to the period start, not
timestamp 0) to the segment start after resetting the stream's segment based
on the demuxer's segment (i.e. after seeks or stream restart). Also make sure
to keep the stream's segment up to date and not just send a new segment event
without storing the segment in the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Also include the presentation offset in the last known position for each
stream and, since we easily can, keep track of the latest known position
inside the demuxer segment too.
It's going to return EOS if the period ended or otherwise there is just no
next fragment left. If we don't store the last return value, it will always
stay OK and gst_adaptive_demux_combine_flows() will always return OK instead
of EOS once all streams are done.
This partially fixes period changes in DASH by at least trying to switch
instead of just stopping. What is still left is that after a period change
with DASH the times all start at 0 again instead of continuing.
The problem here was that after removing the formats and
all the things we could convert, we then intersected these
caps with the template caps.
Hence if a subclass offered permissive sink templates
(e.g. all the possible formats videoconvert handles), but only
one output format, then at negotiation time getcaps returned
caps with the format restricted to that single format, even though
we do handle conversion.
https://bugzilla.gnome.org/show_bug.cgi?id=751255
- add data pointer to GstJpegSegment and pass segment
to all parsing functions, rename accordingly
- shorten GstJpegMarkerCode enum type name to GstJpegMarker
- move function gtk-doc blurbs into .c file
- add since markers
- flesh out docs for SOF markers
https://bugzilla.gnome.org/show_bug.cgi?id=673925
Fix scan for next marker code when there is an odd number of filler
(0xff) bytes before the actual marker code. Also optimize the loop
to execute with fewer instructions (~10%).
This fixes parsing for Spectralfan.mov.
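A simplified sketch of the corrected scan (the real loop in
gstjpegparser.c is more optimized):

  #include <glib.h>

  /* Returns the offset of the marker code byte that follows a run of
   * 0xff fill bytes, or -1 if none is found.  Illustrative sketch. */
  static gssize
  scan_for_marker_code (const guint8 * data, gsize size, gsize offset)
  {
    while (offset + 1 < size) {
      if (data[offset] != 0xff) {
        offset++;
        continue;
      }
      /* skip the entire run of 0xff fill bytes, odd or even */
      while (offset + 1 < size && data[offset + 1] == 0xff)
        offset++;
      if (offset + 1 >= size)
        break;
      if (data[offset + 1] != 0x00)
        return offset + 1;        /* found the actual marker code */
      offset += 2;                /* 0xff 0x00 is byte stuffing */
    }
    return -1;
  }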
The size of a marker segment is defined to be exclusive of any initial
marker code. So, fix the size for SOI, EOI and APPn segments but also
the size of any possible segment that is usually "reserved" or not
explicitly defined.
https://bugzilla.gnome.org/show_bug.cgi?id=707447
Add API for a helper object that can convert between different
stereoscopic video representations, and later do filtering
of multiple view streams.
https://bugzilla.gnome.org/show_bug.cgi?id=611157
Switch the increment of markersize from when it is used to when it is
returned from compute_resync_marker_size.
This also makes the CHECK_REMAINING in gst_mpeg4_parse_video_packet_header
check for the actually required number of bits, and not one too few.
https://bugzilla.gnome.org/show_bug.cgi?id=739345
This reverts commit 916b954315.
Clearly something else was intended, and it also makes
more sense to add the extra bit. The resync marker is
N zero bits plus a 1 bit, and the pattern/mask needs to
be run on N+1 bits too.
(Even after the revert the code doesn't do that of course, so
it still needs to be fixed differently.)
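For illustration, a correct match over N + 1 bits could look like this
(assuming the bits were accumulated LSB-first with
bits = (bits << 1) | next_bit; this is not the parser's actual code):

  #include <glib.h>

  /* Illustrative only; n_zero_bits must be < 32. */
  static gboolean
  is_resync_marker (guint32 last_bits, guint n_zero_bits)
  {
    guint32 mask = (1u << (n_zero_bits + 1)) - 1;   /* covers N + 1 bits */
    guint32 pattern = 0x1;        /* N zeros followed by a single 1 */

    return (last_bits & mask) == pattern;
  }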
https://bugzilla.gnome.org/show_bug.cgi?id=739345
When set, it causes videoaggregator to repeatedly aggregate the last buffer on
an EOS pad instead of skipping it and outputting silence. This is useful, for
instance, while playing back files seamlessly one after the other, to avoid
videoaggregator ever outputting silence (the checkerboard pattern).
It is to be noted that if all the pads on videoaggregator have this property set
on them, the mixer will never forward EOS downstream for obvious reasons. Hence,
at least one pad with 'ignore-eos' set to FALSE must send EOS to the mixer
before it will be forwarded downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=748946
Short sections have 3 bytes of common header, while other sections
have 8 bytes of common header. When packetizing the common header of
a short section, we should stop after the first 3 bytes.
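A sketch of the distinction (byte layout per ISO/IEC 13818-1; the helper
name is illustrative):

  #include <glib.h>

  /* Byte 0:    table_id
   * Bytes 1-2: section_syntax_indicator, reserved bits, section_length
   * Bytes 3-7 (long sections only): subtable id, version /
   *            current_next, section_number, last_section_number */
  static gsize
  packetize_common_header (guint8 * data, guint8 table_id,
      guint16 section_length, gboolean short_section)
  {
    data[0] = table_id;
    data[1] = (short_section ? 0x00 : 0x80) | 0x30 |
        ((section_length >> 8) & 0x0f);
    data[2] = section_length & 0xff;
    if (short_section)
      return 3;                   /* stop here for short sections */
    /* ... bytes 3-7 would be filled in for long sections ... */
    return 8;
  }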
https://bugzilla.gnome.org/show_bug.cgi?id=735653
Provides generic handling of GL buffer objects accessible using
the GL bind points (GL_ARRAY_BUFFER, GL_PIXEL_*_BUFFER).
Implementation based on the current GstGLMemory.
5697b6b89b causes us to possibly listen on a toolkit-provided Display
connection, so we could eat their precious winsys events. Only listen
if we need to (!foreign_display or videooverlay).
It's true that we shouldn't consider errors fatal immediately, but if we
always ignore them we will loop infinitely on live streams with segments
that can't be downloaded at all.
Add a vfunc that is called by glfilter before it sets
caps features and intersects with the peer caps, and
move removing the size from caps into its default
implementation. Allows sub-classes to do more
sophisticated management of the size fields in case they
don't support arbitrary resizing or have distinct
preferences.
Add preserve_update_caps_result boolean on the class to allow
sub-classes to disable videoaggregator removing sizes and framerate
from the update_caps() return result.
A return value of GST_FLOW_OK with a NULL buffer from get_output_buffer()
means the sub-class doesn't want to produce an output buffer, so
skip it.
If gst_videoaggregator_do_aggregate() generates an error, make sure
to propagate it - don't just ignore and discard the error by
over-writing it with the gst_pad_push() result.
Previously, when compiling GstGL with both GL and GLES2 support,
GL_RGBA8 was picked from GL/gl.h. But a clash may happen at
runtime when GLES2 is selected.
gst_gl_internal_format_rgba allows checking at runtime
whether it should use GL_RGBA or GL_RGBA8.
Simple implementation split from GstGLWindowWayland
Can now have multiple glimagesink elements all displaying output
linked via GL or otherwise (barring GL platform limitations).
The intel driver is racy and can crash setting up the two glimagesink contexts.
e.g.
videotestsrc ! tee name=t ! queue ! glupload ! glimagesinkelement
t. ! queue ! gleffects_blur ! glimagesinkelement
videotestsrc ! glupload ! glfiltercube ! tee name=t ! queue ! glimagesinkelement
t. ! queue ! gleffects_blur ! glimagesinkelement
Otherwise we could end up mistaking a gl3 context for a gl2 context,
resulting in a failure to get the list of extensions from the wrapped
context due to the difference between glGetString and glGetStringi for
the GL_EXTENSIONS token.
https://bugzilla.gnome.org/show_bug.cgi?id=749728
When called from gst_gl_window_win32_close(), the internal window
may not exist, and if it does it's going to be destroyed just after
that anyway. It also causes window_proc() to be called and crash
because it gets a NULL context.
When called from gst_gl_window_win32_set_window_handle() we are
going to set another parent anyway, and it's probably better to
reparent directly instead of going through a NULL parent, which could
cause the internal window to pop up briefly.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
gst_gl_context_finalize() is calling gst_gl_window_win32_quit(),
which was posting a message. But then window_proc takes the window's
context and gets NULL.
Now that we've got a GMainLoop we can do like other backends and
simply call g_main_loop_quit().
This also removes duplicated code to release the parent window, and a
potential crash there because parent_proc could be NULL if we never
created the internal window. That could happen, for example, when setting
the state to READY, then setting a window_handle, and going back to NULL state.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
gst_gl_window_win32_send_message_async() could be called before the
internal window is created so we cannot use PostMessage there.
x11 and wayland backends both create a custom GSource for this,
so there is no reason not to do the same for win32.
https://bugzilla.gnome.org/show_bug.cgi?id=749601
Otherwise it could stay client-side without being submitted to the GL
server, resulting in another context waiting on a Fence that will never
become signalled, causing a deadlock.
Make the passthrough check contingent on only the fields we
can modify being unchanged, and pre-compute it when caps
change instead of checking on each buffer. Makes the passthrough
more lenient if consumers are lax about making input and output
caps complete.
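A sketch of the precomputed check, assuming a hypothetical list of the
modifiable fields (the real element compares only the fields it can
actually rewrite):

  #include <gst/gst.h>

  static gboolean
  compute_passthrough (const GstCaps * in_caps, const GstCaps * out_caps)
  {
    static const gchar *fields[] = { "width", "height", "format" };
    GstStructure *ins = gst_caps_get_structure (in_caps, 0);
    GstStructure *outs = gst_caps_get_structure (out_caps, 0);
    guint i;

    for (i = 0; i < G_N_ELEMENTS (fields); i++) {
      const GValue *a = gst_structure_get_value (ins, fields[i]);
      const GValue *b = gst_structure_get_value (outs, fields[i]);

      /* a field missing on both sides still counts as unchanged */
      if (a == NULL && b == NULL)
        continue;
      if (a == NULL || b == NULL ||
          gst_value_compare (a, b) != GST_VALUE_EQUAL)
        return FALSE;
    }
    return TRUE;        /* cached until the caps change again */
  }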
The EOS and EOB NALs have size 2, which is the size of the NAL unit
header itself, so gst_h265_parser_identify_nalu() is not required to
scan for the next start code in this case.
In other cases, a valid NAL unit requires at least 3 bytes (2-byte
header and at least 1 byte of RBSP payload).
Skip the byte alignment bits as per the logic of byte_alignment()
provided in the HEVC specification. This fixes the calculation of the
slice header size.
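A sketch of byte_alignment () as defined in the spec, shown with
GstBitReader for illustration (the parser uses its own NAL reader):

  #include <gst/base/gstbitreader.h>

  static gboolean
  skip_byte_alignment (GstBitReader * br)
  {
    guint8 bit;

    /* alignment_bit_equal_to_one */
    if (!gst_bit_reader_get_bits_uint8 (br, &bit, 1) || bit != 1)
      return FALSE;

    /* alignment_bit_equal_to_zero, until byte aligned */
    while (gst_bit_reader_get_pos (br) % 8 != 0) {
      if (!gst_bit_reader_get_bits_uint8 (br, &bit, 1) || bit != 0)
        return FALSE;
    }
    return TRUE;
  }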
https://bugzilla.gnome.org/show_bug.cgi?id=747613
Even for "live" streams we are not live in the GStreamer meaning of the word.
We don't produce buffers that are timestamped based on their "capture time"
and our clock, but just based on whatever timestamps the stream might contain.
Also even if we wanted to claim to be live, that wouldn't work well as we
would have to return GST_STATE_CHANGE_NO_PREROLL when going from READY to
PAUSED, which we can't. We first need data to know if we are "live" or not.
It would deadlock, as we would then join() the update task from itself.
Instead just post an actual error message on the bus and only stop the update task.
The application is then responsible for shutting down the element, and thus
all the other tasks and everything, based on the error message it gets.
This would've also triggered if, for some reason, the segment was updated
in such a way that the PTS went backwards but the running time increased,
like what happens when non-flushing seeks are done.
We're doing a proper buffer-from-the-past check a few lines below based on the
running time, which is the only time we should care about here.
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called
with a private lock, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live as
we think that 0 upstream latency is possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
It might return OK from subclasses, and that could cause a bitrate
renegotiation. For DASH and MSS that is OK as they won't expose
new pads as part of it, but it can cause issues for HLS as
it will expose new pads, leading to pads that will only ever get EOS,
which causes decodebin to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=745905
Show the DispmanX window only if there's no shared external GL context
set up. When a window is required by the context a transparent
DispmanX element is created and later on made visible by the ::show
method.
https://bugzilla.gnome.org/show_bug.cgi?id=746632
In some upload implementations the out buffer has more than one reference,
making the buffer non-writable, so it won't be possible to modify its
metadata.
This patch moves the metadata copy before increasing the reference count
of the out buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=746173
Asks the subclass for a potential time offset to apply to each
separate stream. In DASH, streams can have "presentation time offsets",
which can be different for each stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Chaining a downstream pool would lead to two owners of the same
pool. In a dynamic pipeline, if one owner is removed from the pipeline
the pool will be stopped, and the rest of the pipeline will fail
since the pool will now be flushing. Also fix proposed pool caching:
filter->pool was never set and never unreffed.
https://bugzilla.gnome.org/show_bug.cgi?id=745705
In case the original caps were missing some optional fields like
interlace-mode. We assume default values for those everywhere,
but they can still cause negotiation to fail if a downstream element
expects the field to be there and at a specific value.
Otherwise the pipeline stalls when running
more than one glimagesink with gst-launch.
Also only register the custom nsapp loop
when setting up the nsapp from gstgl.
We also need to recalculate the offset, since otherwise the frame
mapping will be two lines forward in the U and V planes (I420) due
to gst_video_info_align() rounding the Y plane up to an even number
of lines.
https://bugzilla.gnome.org/show_bug.cgi?id=745054
Make sure we support offset and video alignment when downloading too.
This is currently not used (plane_start is always 0), but it makes
the code correct if we want to use that later.
Provide the right size to GL when uploading. Using maxsize is wrong
since we offset the data pointer by the memory offset and video
alignment offset.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
When the memory is a partial copy, the texture size and videoinfo no
longer make sense. As we cannot guess what the application wants, we
safely copy into a sysmem memory.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
This implements support for GstAllocationParams and memory alignments.
The parameters were simply ignored, which could lead to crashes on
certain platforms when used with libav, given bad luck.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
When trying to render buffers with meta:GLTextureUpload, glimagesink
crashes with a segmentation fault.
This patch works around the crash by setting the method implementation
to NULL after freeing it.
https://bugzilla.gnome.org/show_bug.cgi?id=745206
When setting a new window handle, we need to ensure all implementations
will detect the change.
For that we deactivate the context before setting the window handle,
then reactivate the context.
https://bugzilla.gnome.org/show_bug.cgi?id=745090
When (re)activating the context, the backing window handle might have changed.
If that happened, destroy the previous surface and create a new one.
https://bugzilla.gnome.org/show_bug.cgi?id=745090
Causes the following warning on clang:
gst-dvb-section.c:567:36: error: format specifies type 'unsigned long' but the argument has type 'int' [-Werror,-Wformat]
descriptors_loop_length, end - 4 - data);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
This fixes a very slow rendering rate regression that only
happens when using gst-launch, i.e. in the case where
the main thread does not run any NSApp loop.
Git bisect reported it has been introduced by the commit
e10d2417e2:
"move to CGL and CAOpenGLLayer for rendering".
Then the commit 7d46357627:
"gstglwindow_cocoa: fix slow render rate" attempted to fix
the slow rendering rate problem when using gst-launch.
At least for me it does not work. I tried several
combinations, for example flushing CA transactions in the
custom app loop, as mentioned in the docs, but the only solution
that fixes the slow rendering is reducing the loop latency.
From what I tested there is no need to go below 60ms, even if the
framerate interval is much lower (16.6ms for 60 fps).
At any other time, we have no idea how to match up maps and unmaps.
We also don't know exactly how the calling code is using us.
Also fixes the case where we're trying to transfer while someone else
is accessing our data pointer or texture resulting in mismatched video
frames.
https://bugzilla.gnome.org/show_bug.cgi?id=744839
One has to use the src_lock anyway to protect min/max/live so they
can be notified atomically to the src thread to wake it up on changes,
such as property changes. So there is no point in having a second lock.
Also, the object lock was being held across a call to
GST_ELEMENT_WARNING, guaranteeing a deadlock.
While gst_aggregator_iterate_sinkpads() makes sure that every pad is only
visited once, even when the iterator has to resync, this is not all we have
to do for querying the latency. When the iterator resyncs we actually have
to query all pads for the latency again and forget our previous results. It
might have happened that a pad was removed, which influenced the result of
the latency query.
It was between another function and its helper function before, which was
confusing when reading the code as it had nothing to do with the other
functions.
This lock is not what is commonly known as a "stream lock" in GStreamer,
it's not recursive and it's taken from the non-serialized FLUSH_START event.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Move the property from the subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed
from mssdemux and hlsdemux as it is now in the base class.
Don't use private GMutex implementation details to check
whether it has been freed already or not. Just clear mutex
and GCond unconditionally in free function, they are always
inited anyway, and the free function can't be called multiple
times either.
steal_buffer() + unref seems to be a wide-spread idiom
(which perhaps indicates that something is not quite
right with the way aggregator pad works currently).
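The idiom in question, as it appears at call sites:

  GstBuffer *buf = gst_aggregator_pad_steal_buffer (pad);

  if (buf != NULL)
    gst_buffer_unref (buf);     /* we only wanted to drop it */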
Where possible, use the _OBJECT variants in order to better track
which object each debug statement is coming from.
Define (and use) GST_CAT_DEFAULT where applicable
Use GST_PTR_FORMAT where applicable
And use the average to go up in resolution, and the last fragment
bitrate to go down.
This allows the demuxer to react rapidly to bitrate loss, and
be conservative for bitrate improvements.
+ Add a construct-only property to define the number of fragments
to consider when calculating the average moving bitrate.
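A sketch of the resulting policy (names are illustrative, not the
adaptivedemux API):

  #include <glib.h>

  /* Use the last fragment's bitrate to switch down quickly, and the
   * moving average to switch up conservatively.  Illustrative only. */
  static guint64
  bitrate_for_selection (guint64 moving_average,
      guint64 last_fragment_bitrate, guint64 current_variant_bitrate)
  {
    if (last_fragment_bitrate < current_variant_bitrate)
      return last_fragment_bitrate;   /* react rapidly to bitrate loss */
    return moving_average;            /* be conservative going up */
  }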
https://bugzilla.gnome.org/show_bug.cgi?id=742979
If the src framerate and videoaggregator's output framerate were
different, then we were taking every single buffer that had duration=-1
as it came in regardless of the buffer's start time. This caused the src
to possibly run at a different speed to the output frames.
https://bugzilla.gnome.org/show_bug.cgi?id=744096
In gst_gl_filter_fixate_caps () it can goto done without freeing the memory of
the tmp GstStructure. This makes it go out of scope and leak.
CID #1265765