When all fragments of a live stream have already been downloaded,
dashdemux would busy-loop because the default implementation of
has_next_fragment returns TRUE. Implement it to correctly signal
whether adaptivedemux should wait for a manifest update before
trying to fetch new fragments.
When the manifest is updated, its timestamps might have changed a little
due to rounding and timescale conversions. If the change makes the timestamp
of the current segment go up, dashdemux repositions to the previous segment,
causing one extra unnecessary download.
So when repositioning, add an extra 10 microseconds to cover for those
rounding issues and increase the chance of landing in the same segment.
Additionally, improve the time used when the client is already past the
last segment. Instead of using the last segment's starting timestamp, use its
final timestamp, so that we reposition to the next segment and not to the one
that has already been downloaded.
These functions for directly getting and setting segment indexes
are no longer useful, as we now need two indexes: the repeat index
and the segment index.
The only operations needed are advance_segment, going back to the
first one or seeking for a timestamp.
Segments are now stored with their repeat counts instead of expanding
them into multiple segments. Previously, advancing to the next segment
using a single index had to iterate over the whole list every time.
This commit addresses this by storing both the segment index and the
repeat index, making advancing to the next segment a simple increment
of either the repeat or the segment index.
Use a single segment to represent it internally to avoid using too
much memory. This has the drawback of requiring a linear search to
find the correct segment to play, but this can be fixed by using
binary searches or by caching the current position and just looking
for the next one.
https://bugzilla.gnome.org/show_bug.cgi?id=748369
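For illustration, a minimal sketch of the O(1) advancing described above,
using hypothetical structure and field names (Segment, segment_index,
repeat_index) rather than the actual dashdemux types:

  #include <glib.h>

  typedef struct {
    guint64 duration;           /* duration of one repetition */
    guint repeat_count;         /* how many times this entry repeats */
  } Segment;

  typedef struct {
    GPtrArray *segments;        /* one Segment per timeline entry */
    guint segment_index;        /* which entry we are in */
    guint repeat_index;         /* which repetition inside that entry */
  } Stream;

  /* Advance in O(1): bump the repeat index and only move to the next
   * entry once all repetitions of the current one were consumed. */
  static gboolean
  advance_segment (Stream * stream)
  {
    Segment *seg = g_ptr_array_index (stream->segments, stream->segment_index);

    if (stream->repeat_index + 1 < seg->repeat_count) {
      stream->repeat_index++;
      return TRUE;
    }
    if (stream->segment_index + 1 < stream->segments->len) {
      stream->segment_index++;
      stream->repeat_index = 0;
      return TRUE;
    }
    return FALSE;               /* no next segment */
  }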
The custom code is wrong as it ignores the templates, which leads to
missing fields in the result. Instead, simply use the default get_caps
implementation which does it correctly (get the template, intersect
with filter and return).
https://bugzilla.gnome.org/show_bug.cgi?id=749237
Without this, we will fixate weird pixel-aspect-ratios like 1/2147483647. But
in the end, all the negotiation code in videoaggregator needs a big cleanup
and videoaggregator needs to get rid of the software-mixer specific things
everywhere.
Upstream might not give us a caps event (dtlssrtpdec) because it might be an
RTP/RTCP mixed stream, but we split the two streams anyway and should report
proper caps downstream if possible.
Fixes "sticky event misordering" warnings with dtlssrtpdec.
And provide a home-made fallback for older GLib versions,
so that we can later find these and remove them when
we bump the GLib requirement (which is certainly going
to happen before 2.0).
https://bugzilla.gnome.org/show_bug.cgi?id=748495
It's better to just select some random variant playlist instead of stopping;
chances are that it will keep working and we might just have to
select a different variant again later.
We should only refresh the currently selected variant playlist (if any,
otherwise the main playlist), not the main playlist. And only try to
refresh the main playlist if updating the variant playlist fails.
Some servers (Wowza) use the request of the main playlist to create a
"session", which is then part of the URI of the variant playlist and
also the fragments. Refreshing the main playlist would generate a new
session, and the server rate limits that usually. And after a few retries
the server just kicks us out.
Also as a side effect we now use the same downloader for all playlists, so
that we only have 2 instead of 3 connections to the server. And also
previously we just ignored the downloaded data from the main playlist that
the base class gave to us.
When the segment is very short, typefinding might fail, and when
finishing the segment hlsdemux would consider the remaining data
(pending_buffer) to be an encryption leftover.
This patch fixes it and makes sure an error is properly posted if
typefinding fails, by refactoring buffer handling into a function
and using it from the data_received and finish_fragment functions.
We also have to update the current_file GList pointer in the M3U playlist
client, otherwise we are just continuing playback from the current position
instead of seeking.
The variable 'hands' has already been checked to contain a value at the
beginning of the current block. There is no need to check again; this is
logically dead code.
CID 1197693
Caps refcounting was all wrong in this function. Rewrite it and add some
comments to make it clearer.
Fix caps leaks with the
validate.file.glvideomixer.simple.play_15s.synchronized scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=747915
Signed-off-by: Guillaume Desmottes <guillaume.desmottes@collabora.co.uk>
If the old opencv1-style legacy include directory is available,
this change is purely cosmetic (maybe it will compile a bit faster).
It becomes an FTBFS fix when opencv1-style include directory is missing
(possibly because opencv package maintainer decided not to pack it).
https://bugzilla.gnome.org/show_bug.cgi?id=747705
Fix a caps leak with the
validate.file.glvideomixer.simple.play_15s.synchronized scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=747915
Signed-off-by: Guillaume Desmottes <guillaume.desmottes@collabora.co.uk>
'array_buffers' contains borrowed GstBuffers and so shouldn't have a free
function. 'frames' is the one containing GstGLMixerFrameData and so should use
_free_glmixer_frame_data as its free function.
Fix GstGLMixerFrameData leaks with the
validate.file.glvideomixer.simple.play_15s.synchronized scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=747913
Signed-off-by: Guillaume Desmottes <guillaume.desmottes@collabora.co.uk>
Fix a simple buffer overflow - 16 bytes isn't enough to hold
the string representation of a gulong on x86_64. I guess the
intent was to generate a 32 bit random key, so let's do that.
Only matters if anyone ever ports the sink to 1.x
https://bugzilla.gnome.org/show_bug.cgi?id=676524
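As a rough illustration of the fix (not the element's actual code): generate
a 32-bit value and print it into a buffer that is guaranteed to be large
enough.

  #include <glib.h>

  static void
  make_random_key (gchar key[16])
  {
    guint32 v = g_random_int ();        /* 32-bit random value */

    /* 8 hex digits plus the NUL always fit in 16 bytes, unlike the
     * string form of a gulong on LP64 platforms */
    g_snprintf (key, 16, "%08x", v);
  }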
There is a playback error when trying to play content that
has the 'application' mimeType. This commit prevents an exception when
setting up text streams.
https://bugzilla.gnome.org/show_bug.cgi?id=747525
As mentioned in the release notes: Added new Sps/Pps strategies for real-time
video (replace the old setting variable 'bEnableSpsPpsIdAddition' with
'eSpsPpsIdStrategy')
Upstream might send buffer lists instead of buffers, in which case hlssink's
probe won't get called and a new segment won't be created when needed.
This patch fixes it by adding a chain_list function to the sink pad
that will just pass through the whole buffer list if no segment needs
to be requested at the moment, or convert the list into buffers to
check the proper timestamp at which to request the next key unit that
will start the segment.
https://bugzilla.gnome.org/show_bug.cgi?id=746906
This way we let opusdec do the resampling if needed and don't carry
around buffers with an unnecessarily high sample rate.
While Opus always uses 48kHz internally, this information from the
header specifies which frequencies are safe to drop.
No need to ref/unref the connection every time we push something on the pool.
However we have to provide non-NULL data to the pool, so let's just give it
some coffee.
This way we will share threads with other DTLS connections if possible, and
don't have to start/stop threads for timeouts if there are many to be handled
in a short period of time.
Also use the system clock and async waiting on it for scheduling the timeouts.
GST_DTLS_USE_GST_LOG is not defined anywhere, so
we'd just log into the default category by accident.
We use the gst logging system unconditionally now,
so might just as well remove this #if #else.
gcc-4.9.2:
gstdtlsagent.c:114:1: error: old-style function definition
gstdtlsconnection.c:253:3: error: ISO C90 forbids mixed declarations and code
gstdtlsconnection.c:291:3: error: ISO C90 forbids mixed declarations and code
gstdtlsconnection.c:391:3: error: ISO C90 forbids mixed declarations and code
gstdtlsconnection.c:434:3: error: ISO C90 forbids mixed declarations and code
gstdtlsconnection.c:773:1: error: 'BIO_s_gst_dtls_connection' was used with no prototype before its definition
gstdtlsconnection.c:773:1: error: old-style function definition
gstdtlsconnection.c:128:32: error: passing 'const char [30]' to parameter of type 'void *'
discards qualifiers [-Werror,-Wincompatible-pointer-types-discards-qualifiers]
SSL_get_ex_new_index (0, "gstdtlsagent connection index", NULL, NULL,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/openssl/ssl.h:1981:43: note: passing argument to parameter 'argp' here
int SSL_get_ex_new_index(long argl, void *argp, CRYPTO_EX_new *new_func,
^
gstdtlsconnection.c:822:40: error: arithmetic on a pointer to void is a GNU extension
[-Werror,-Wpointer-arith]
memcpy (out_buffer, priv->bio_buffer + priv->bio_buffer_offset, copy_size);
~~~~~~~~~~~~~~~~ ^
In some upload implementations the out buffer has more than one reference,
making the buffer non-writable, so it won't be possible to modify its
meta-data.
This patch moves the meta-data copy before increasing the reference of the out
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=746173
glupload ! glcolorconvert ! sink
Some properties are manually forwarded. The rest are available using
GstChildProxy.
The two signals are forwarded as well.
It encapsulates a configurable GL processing element in the
upload/colorconvert/download dance required to transparently process
the majority of GstBuffers.
GLImage does not use any kind of internal pool. There was some
remaining code and a comment stating that it was managing the
pool, and it was in fact setting the active state when going to
the READY state.
* Only create the pool if requested and in propose_allocation
* Cache the pool to avoid reallocation on spurious reconfigure
* Don't try to deactivate the pool (we don't own it)
https://bugzilla.gnome.org/show_bug.cgi?id=745705
If searchIdx() doesn't find the id it returns -1, which breaks
motioncelssvector.at (idx). Check for it and return if not found.
Changing a few other lines for style consistency.
The max latency parameter is "the maximum time an element
synchronizing to the clock is allowed to wait for receiving all
data for the current running time" (docs/design/part-latency.txt).
https://bugzilla.gnome.org/show_bug.cgi?id=744338
LibJPEG uses macroblocks of 8x8 samples. In this element we use RGB and
Y444, two 24-bit formats that are stored in 32-bit pixels. This means we
have 32x32 byte macroblocks. For this reason, we need to allocate
our buffer slightly larger. We also need to pass the line pointers in
the right order, otherwise the image ends up upside-down.
https://bugzilla.gnome.org/show_bug.cgi?id=745109
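A sketch of the kind of adjustment involved, with a hypothetical helper
(names and the exact padding are illustrative, not the element's code):
round the allocation up to whole 8-line MCUs and hand libjpeg an ordered
array of line pointers.

  #include <stdio.h>
  #include <gst/gst.h>
  #include <jpeglib.h>

  static JSAMPROW *
  alloc_padded_rows (guint width, guint height, guint8 ** out_data)
  {
    guint padded_height = GST_ROUND_UP_8 (height);  /* whole 8-line MCUs */
    guint stride = width * 4;     /* 24-bit formats stored in 32-bit pixels */
    guint8 *data = g_malloc0 (stride * padded_height);
    JSAMPROW *rows = g_new (JSAMPROW, padded_height);
    guint i;

    /* line pointers must run from the first line to the last one,
     * otherwise the image ends up upside-down */
    for (i = 0; i < padded_height; i++)
      rows[i] = data + i * stride;

    *out_data = data;
    return rows;
  }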
Using mkstemp without setting the permission mask is potentially harmful.
The POSIX specification of mkstemp() does not say anything about file modes,
so we need to make sure the process's file mode creation mask is set
appropriately before calling it.
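A minimal sketch of that precaution, assuming a caller that owns the
template string (names are illustrative):

  #include <stdlib.h>
  #include <sys/types.h>
  #include <sys/stat.h>

  static int
  open_temp_file (char *tmpl)
  {
    /* restrict the file to the owner, whatever mode this platform's
     * mkstemp() happens to use */
    mode_t old_mask = umask (S_IRWXG | S_IRWXO);
    int fd = mkstemp (tmpl);

    umask (old_mask);             /* restore the previous mask */
    return fd;
  }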
This implements support for GstAllocationParams and memory alignments.
The parameters were simply ignored, which could lead to crashes on
certain platforms when used with libav, with some bad luck.
https://bugzilla.gnome.org/show_bug.cgi?id=744246
+ Split headers from source
+ Remove unneeded AM_CFLAGS, AM_LDFLAGS
+ Always set OBJCFLAGS
Due to the presence of a .m and regardless of the conditional values,
automake will promote the link command to OBJC using OBJCFLAGS. Only
the basic flags (like warnings and optimization) are going to make a
difference though.
This cleanup builds up the Makefile from the less specific files first
toward the more specific ones. FLAGS are built on the basis that unused
flags will simply be empty variables.
i686-apple-darwin11-llvm-gcc-4.2
gstglmixer.h:43: error: redefinition of typedef ‘GstGLMixer’
gstglmixerpad.h:32: error: previous declaration of ‘GstGLMixer’ was here
gstglmixer.h:46: error: redefinition of typedef ‘GstGLMixerFrameData’
gstglmixerpad.h:33: error: previous declaration of ‘GstGLMixerFrameData’ was here
The graphene-1.0 part should not be in the source code. This directory
is part of the cflags include, similar to the gstreamer-1.0/
directory. This breaks compilation if the include directory where
graphene is installed is not in your include path.
Bitrate-limit is already available in the base class and, even though
the bandwidth-usage name is better, hls and mss already used
bitrate-limit. This patch deprecates bandwidth-usage and maps
it to the base class's bitrate-limit.
Move the property from the subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allow the playlist-length to accept '0' as a value, indicating
that no segment should be removed from the playlist. This allows
generating playlists to be used as VOD when complete.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed from
mssdemux and hlsdemux as it is now in the base class.
By implementing get_live_seek_range.
As shown by:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php
This patch handles live seeking by setting a live seek range
between now - timeShiftBufferDepth and now.
The interesting thing with this stream is that one can actually
request fragments back to availabilityStartTime, but it seems quite clear
in the spec that content is only guaranteed to exist up to
timeShiftBufferDepth.
One can test live seeking this way:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php \
--set-scenario seek_back.scenario
with scenario being:
description, seek=true
seek, playback-time=position+5.0, start="position-600.0",
flags=accurate+flush
This example will play the stream, wait for five seconds, then seek back
to a position 10 minutes earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=744362
Add parsed/framed=true to the caps to allow negotiation with some
muxers that require parsed input. Encoders already provide
parsed/framed output, so the caps should say so.
Some variables are not initialized in the constructor. It is highly unlikely
they are used before being set, but it is safer to initialize them.
CID #1197704
Allows finer-grained decisions about formats and features at each
stage of the pipeline.
Also provide propose_allocation for glupload based on the supported
methods.
Make GstGLMemory hold the texture target (tex_target) the texture it represents
(tex_id) is bound to. Modify gst_gl_memory_wrapped_texture and
gst_gl_download_perform_with_data to take the texture target as an argument.
This change is needed to support wrapping textures created outside libgstgl,
which might be bound to a target other than GL_TEXTURE_2D. For example on OSX
textures coming from VideoToolbox have target GL_TEXTURE_RECTANGLE.
With this change we still keep (and sometimes imply) GL_TEXTURE_2D as the
target of textures created with libgstgl.
API: modify GstGLMemory
API: modify gst_gl_memory_wrapped_texture
API: gst_gl_download_perform_with_data
Depending on the platform, it was only ever implemented to 1) set a
default surface size, 2) resize based on the video frame, or 3) do nothing.
Instead, provide a set_preferred_size () that elements/applications
can use to request a certain size which may be ignored for
videooverlay/other cases.
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request
gstdashdemux.c:1330:13: error: implicit conversion from enumeration type 'enum _GstAdaptiveDemuxFlowReturn' to different enumeration type
'GstFlowReturn' [-Werror,-Wenum-conversion]
ret = GST_ADAPTIVE_DEMUX_FLOW_SUBSEGMENT_END;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gmyth seems to be unmaintained upstream, and no one has asked
for this to be ported for a very long time, so let's just
remove it. Neither debian nor Fedora seem to ship libgmyth
any longer, and in any case it's most likely deprecated by
the UPnP support in MythTV.
The segment start time is calculated as the offset into the current segment.
The old condition to detect the end of period (i.e. segment start time >
period start + period duration) failed when the period start was not 0 since
the segment start time does not take the period start time into account.
Fix this detection by only comparing the segment start to the period duration.
https://bugzilla.gnome.org/show_bug.cgi?id=733369
The ISOBMFF profile allows defining subsegments in a segment. At those
subsegment boundaries the client can switch from one representation to
another, as they have aligned indexes.
To handle those, the 'sidx' index is parsed from the stream and its
entries point to the pts/offset of the samples in the stream. Knowing that
the entries are aligned across the different representations allows the client
to switch mid-fragment. In this profile a single fragment is used per
representation and the subsegments are contained in this fragment.
To notify the superclass about the subsegment boundary the chunk_received
function returns a special flow return that indicates that. In this case,
the super class will check if a more suitable bitrate is available and will
change to the same subsegment in this new representation.
It also requires special handling of the position in the stream as the
fragment advancing is now done by incrementing the index of the subsegment.
It will only advance to the next fragment once all subsegments have been
downloaded.
https://bugzilla.gnome.org/show_bug.cgi?id=741248
The old code was using gst_caps_normalize() and was generally overly
complex. Simplify by picking sample rate and number of channels from
upstream and the sample format from the allowed caps. If the format caps
is a list of strings, just pick the first one. And if the srcpad isn't
linked yet, use the default format (S16).
https://bugzilla.gnome.org/show_bug.cgi?id=740195
Optimize the loop by moving a condition outside of it, and reuse the
find_next_fragment function to check if there is a next fragment instead of
replicating the same loop.
Duration queries can be done a few times per second and would cause
the segment list to be traversed for every one. Caching the duration
prevents that.
The variable 'hands' has already been checked to contain a value at the
beginning of the current block (in line 504). There is no need to check
again; this is logically dead code.
CID 1197693
The duration values in playlists are approximate only, and for
playlist versions 2 and older they are only rounded integer values.
They cannot be used to timestamp buffers. This resulted in playback
gaps and skips because the actual duration of fragments is slightly
different. The solution is to only set the pts of the very first
buffer processed, not for each fragment.
q->bitrate is a guint64, but G_TYPE_INT may read fewer bits
off the stack, and if we pass more, the NULL sentinel
may not be found at the right place, which in turn might
lead to crashes.
https://bugzilla.gnome.org/show_bug.cgi?id=741751
hlsdemux assumes that seeking is not allowed for live streams,
however seeking is possible if there are sufficient fragments in the
manifest. For example the BBC have live streams that contain 2 hours
of fragments.
The seek code for both live and on-demand is common code. The
difference between them is that an offset has to be calculated
for the timecode of the first fragment in the live playlist.
When hlsdemux starts to play a live stream, the possible seek range
is between 0 and A seconds. After some time has passed, the beginning of
the stream will no longer be available in the playlist and the seek
range is between B and C seconds.
Seek range:
start 0 ........... A
later B ........... C
This commit adds code to keep a note of the B and C values
and the highest sequence number it has seen. Every time it updates the
media playlist, it walks the list of fragments, seeing if there is a
fragment with sequence number > highest_seen_sequence. If so, the values
of B and C are updated. The value of B is used when timestamping
buffers.
It also makes sure the seek range is never closer than three fragments
from the end of the playlist - see 6.3.3. "Playing the Playlist file"
of the HLS draft.
https://bugzilla.gnome.org/show_bug.cgi?id=725435
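A rough sketch of that bookkeeping, with made-up structure and field names
standing in for the real m3u8 client data (illustrative only):

  #include <gst/gst.h>

  typedef struct {
    gint64 sequence;              /* media sequence number */
    GstClockTime duration;
  } Fragment;                     /* stands in for a playlist entry */

  typedef struct {
    gint64 highest_sequence;      /* highest sequence number seen so far */
    GstClockTime seek_start;      /* B */
    GstClockTime seek_stop;       /* C */
  } SeekRange;

  static void
  update_seek_range (SeekRange * range, GList * fragments)
  {
    GstClockTime window = 0;
    GList *l;

    for (l = fragments; l != NULL; l = l->next) {
      Fragment *frag = l->data;

      /* a sequence number we have not seen before means the live
       * edge moved forward by that fragment's duration */
      if (frag->sequence > range->highest_sequence) {
        range->seek_stop += frag->duration;
        range->highest_sequence = frag->sequence;
      }
      window += frag->duration;
    }

    /* fragments older than the current playlist window are gone;
     * a real implementation would also stay three fragment durations
     * away from the live edge, per section 6.3.3 of the HLS draft */
    range->seek_start = range->seek_stop - window;
  }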
For small amounts of data, the stream might be mistyped and this would cause
the pipeline to fail. For example, if you have AAC inside mpegts,
with small amounts of data the AAC samples can cause the typefinder to
think the stream is plain AAC and not mpegts.
https://bugzilla.gnome.org/show_bug.cgi?id=736061
If typefind fails, check to see if the buffer is too short for
typefind. If this is the case, prepend the decrypted buffer to the
pending buffer and try again the next time around.
https://bugzilla.gnome.org/show_bug.cgi?id=740458
Corrected the final boundary mechanism so that a final boundary is
added to each mail with multipart content that is sent,
not just to the last one.
https://bugzilla.gnome.org/show_bug.cgi?id=741553
This reverts commit 15394aa705.
The latest release (v1.1) does not have pkg-config support
yet, so this plugin won't be built with the latest release.
Cerbero uses the latest release, so this makes cerbero
builds fail, which expect the plugin to be built.
We can re-commit this once there's a release that includes
pkg-config support.
Rework reverse fragment traversing with repetition fields to prevent
NULL pointer deref and avoid never advancing a fragment as the variable
is unsigned and would always be non-negative.
CID #1257627
CID #1257628
Read the "r" attribute from fragments to support fragments nodes
that use repetition to have a shorter Manifest xml.
Instead of doing:
<c d="100" />
<c d="100" />
You can use:
<c d="100" r="2" />
According to the HLS spec, the remainder of the line following
the comma on the EXTINF tag is not required. This patch removes
the fake title and saves some bytes on the playlist.
https://bugzilla.gnome.org/show_bug.cgi?id=741096
A context can create a GLsync object that can be waited on in order
to ensure that GL resources created in one context are able to be
used in another shared context without any chance of reading invalid
data.
This meta would be placed on buffers that are known to cross from
one context to another. The receiving element would then wait
on the sync object to ensure that the data to be used is complete.
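At the GL level the mechanism boils down to fence objects; roughly
(plain GL 3.2 / ES 3.0 calls, not the meta's actual API):

  /* producer context: after writing to the shared texture */
  GLsync sync = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  glFlush ();                     /* make the fence visible to other contexts */

  /* consumer context: before reading from that texture */
  glWaitSync (sync, 0, GL_TIMEOUT_IGNORED);   /* server-side wait */
  glDeleteSync (sync);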
This gives more flexibility to the subclasses and permits removing the
ugly GstVideoAggregatorClass->disable_frame_conversion API.
WARNING: This breaks the API as it removes the disable_frame_conversion
field
API:
+ GstVideoAggregatorClass->find_best_format
+ GstVideoAggregatorPadClass->set_format
+ GstVideoAggregatorPadClass->prepare_frame
+ GstVideoAggregatorPadClass->clean_frame
- GstVideoAggregatorClass->disable_frame_conversion
https://bugzilla.gnome.org/show_bug.cgi?id=740768
If we seek when the media is in the stopped state, playback-test gives a
critical error, since the context of glimagesink is destroyed during stop.
But since the context is not present, we need not handle send_event in
glimagesink. Hence add a condition to check if the context is valid.
https://bugzilla.gnome.org/show_bug.cgi?id=740305
Otherwise e.g. videotestsrc ! openh264enc ! ... will drop every second frame,
because the target bitrate can't be reached without losing too
much quality.
gst_glimage_sink_handle_events can be called from the overlay interface and from
the main thread before GL is setup. Before this change, that would call
_ensure_gl_setup() and deadlock on OSX.
Change things so that it's always safe to call gst_glimage_sink_handle_events()
without stuff deadlocking.
Remove gst_glimage_sink_handle_events call in gst_glimage_sink_init. It was
unnecessary and when the element was instantiated from the main thread, caused a
deadlock in OSX creating the context (thread).
Both Firefox and Chrome use OPUS as the encoding name in their SDP.
Adding this now de facto standard name removes the need for a special
case in the SDP parsing code.
https://bugzilla.gnome.org/show_bug.cgi?id=737810
With force-aspect-ratio=true, if the width or height changed, the
viewport wasn't being updated to respect the new video width and height
until a resize occurred.
Otherwise, it is only possible for the sink pads and the src pads to
have the exact same caps features. We can convert from any feature
to another feature so support that.
Otherwise, it is only possible for the sink pads and the src pads to
have the exact same caps features. We can convert from any feature
to another feature so support that.
Do not try to render a buffer that is already being rendered.
This happens typically during the initial rendering stage as the first
buffer is rendered twice: first by preroll(), then by render().
This commit avoids this assertion failure:
CRITICAL: gst_wayland_compositor_acquire_buffer: assertion
'meta->used_by_compositor == FALSE' failed
https://bugzilla.gnome.org/show_bug.cgi?id=738069
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
If waylandsink is the owner of the display then it is in charge
of catching input events on the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=733682
Signed-off-by: Tifaine Inguere <tifaine.inguere@st.com>
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
There are two cases covered here:
1) The GstWlDisplay forces the release of the last buffer and the pool
gets destroyed in this context, which means it unregisters all the
other buffers from the GstWlDisplay as well and the display->buffers
hash table gets corrupted because it is being iterated.
2) The pool and its buffers get destroyed concurrently from another
thread while GstWlDisplay is finalizing and many things get corrupted.
The main reason behind this is that when the video caps change and the video
subsurface needs to resize and change position, the wl_subsurface.set_position
call needs a commit in its parent in order to take effect. Previously,
the parent was the application's surface, over which there is no control.
Now, the parent is inside the sink, so we can commit it and change size smoothly.
As a side effect, this also allows the sink to draw its black borders on
its own, without the need for the application to do that. And another side
effect is that this can now allow resizing the sink when it is in top-level
mode and have it respect the aspect ratio.
Because we no longer have a custom buffer pool that holds a reference
to the display, there is no way for a cyclic reference to happen like
before, so we no longer need to explicitly call a function from the
display to release the wl_buffers.
However, the general mechanism of registering buffers to the display
and forcibly releasing them when the display is destroyed is still
needed to avoid potential memory leaks. The comment in wlbuffer.c
is updated to reflect the current situation.
This reduces the complexity of having a custom buffer pool, as
we don't really need it. We only need the custom allocation part.
And since the wl_buffer is no longer saved in a GstMeta, we can
create it and add it on the buffers in the sink's render()
function, which removes the reference cycle caused by the pool
holding a reference to the display and also allows more generic
scenarios (the allocator being used in another pool, or buffers
being allocated without a pool [if anything stupid does that]).
This commit also simplifies the propose_allocation() function,
which doesn't really need to do all these complicated checks,
since there is always a correct buffer pool available, created
in set_caps().
The other side effect of this commit is that a new wl_shm_pool
is now created for every GstMemory, which means that we use
as much shm memory as we actually need and no more. Previously,
the created wl_shm_pool would allocate space for 15 buffers, no
matter if they were being used or not.
This also removes the GstWlMeta and adds a wrapper class for wl_buffer
which is saved in the GstBuffer qdata instead of being a GstMeta.
The motivation behind this is mainly to allow attaching wl_buffers on
GstBuffers that have not been allocated inside the GstWaylandBufferPool,
so that if for example an upstream element is sending us a buffer
from a different pool, which however does not need to be copied
to a buffer from our pool because it may be a hardware buffer
(hello dmabuf!), we can create a wl_buffer directly from it and first,
attach it on it so that we don't have to re-create a wl_buffer every
time the same GstBuffer arrives and second, force the whole mechanism
for keeping the buffer out of the pool until there is a wl_buffer::release
on that foreign GstBuffer.
The header would be read each and every time the parse function was called,
which is not necessary: until we have complete data, we need not parse
the header again.
https://bugzilla.gnome.org/show_bug.cgi?id=737984
In gst_hls_demux_get_next_fragment() the next fragment URI gets
stored in next_fragment_uri, but the gst_hls_demux_updates_loop()
can at any time update the playlist, rendering this string invalid.
Therefore, any data (like key, iv, URIs) that is taken from a
GstM3U8Client needs to be copied. In addition, accessing the
internals of a GstM3U8Client requires locking.
https://bugzilla.gnome.org/show_bug.cgi?id=737793
As openh264 has no way to attach any IDs to input frames that we then get on
the output frames, we have to assume that the input has valid PTS. We just
take the frame with the oldest PTS, and if there is no PTS information we take
the one with the oldest DTS.
- update for shaders
- add alpha property
- image placement properties shamelessly borrowed from gdkpixbufoverlay
- image placement properties are GstController able
- use GstGLMemory for the overlay image data
- add support for gles2
Otherwise we could pass on an RGBA-formatted buffer and downstream would
misinterpret that as some other video format.
Fixes pipelines of the form
gleffects ! tee ! xvimagesink
Allows callers to properly reference count the buffers used for
rendering.
Fixes a redraw race in glimagesink where the previous buffer
(the one used for redraw operations) is freed as soon as the next
buffer is uploaded.
1. glimagesink uploads in _prepare() to texture n
1.1 glupload holds buffer n
2. glimagesink _render()s texture n
3. glimagesink uploads texture n+1
3.1 glupload frees the previous buffer, which deletes texture n
3.2 glupload holds buffer n+1
4. glwindow resize/expose
5. glimagesink redraws with texture n
The race is that the buffer n (the one used for redrawing) is freed as soon as
the buffer n+1 arrives. There could be any amount of time and number of
redraws between this event and when buffer n+1 is actually rendered and thus
replaces buffer n as the redraw source.
https://bugzilla.gnome.org/show_bug.cgi?id=736740
If EOS or ERROR happens before the download loop thread has reached its
g_cond_wait() call, then the g_cond_signal doesn't have any effect and
the download loop thread gets stuck later.
https://bugzilla.gnome.org/show_bug.cgi?id=735663
If EOS or ERROR happens before the download loop thread has reached its
g_cond_wait() call, then the g_cond_signal doesn't have any effect and
the download loop thread gets stuck later.
https://bugzilla.gnome.org/show_bug.cgi?id=735663
If EOS or ERROR happens before the download loop thread has reached its
g_cond_wait() call, then the g_cond_signal doesn't have any effect and
the download loop thread gets stuck later.
https://bugzilla.gnome.org/show_bug.cgi?id=735663
The internal pad still keeps its EOS flag and event as it can be assigned
after the flush-start/stop pair is sent. The EOS is assigned from the streaming
thread so this is racy.
To be sure to clear it, it has to be done after setting the source to READY to
be sure that its streaming thread isn't running.
https://bugzilla.gnome.org/show_bug.cgi?id=736012
The internal pad still keeps its EOS flag and event as it can be assigned
after the flush-start/stop pair is sent. The EOS is assigned from the streaming
thread so this is racy.
To be sure to clear it, it has to be done after setting the source to READY to
be sure that its streaming thread isn't running.
https://bugzilla.gnome.org/show_bug.cgi?id=736012
The internal pad still keeps its EOS flag and event as it can be assigned
after the flush-start/stop pair is sent. The EOS is assigned from the streaming
thread so this is racy.
To be sure to clear it, it has to be done after setting the source to READY to
be sure that its streaming thread isn't running.
https://bugzilla.gnome.org/show_bug.cgi?id=736012
Packetized mode was being set when the framerate was set,
which is not correct. Change this by checking the
input segment format: if the input segment is in TIME the input is
packetized, and if it is in BYTES it is not.
https://bugzilla.gnome.org/show_bug.cgi?id=736252
Previously we only refetched the playlist if downloading a fragment
had failed once. We should also do that if it failed a second or third time;
chances are that the playlist was updated by now and contains new URIs.
Face detection will be performed only if the image standard deviation is
greater than min-stddev. The default min-stddev is 0 for backward
compatibility. This property avoids performing face detection on
images with little change, improving cpu usage and reducing false
positives.
https://bugzilla.gnome.org/show_bug.cgi?id=730510
* aspect should not be 0 on init
* rename fovy to fov
* add mvp to properties as boxed graphene type
* fix transformation order. scale first
* clear color with 1.0 alpha
https://bugzilla.gnome.org/show_bug.cgi?id=734223
If the language is not specified in the AdaptationSet, use the ContentComponent
node to get it. We only do so if there is a single ContentComponent, as
it doesn't seem clear what to do if there are multiple entries.
https://bugzilla.gnome.org/show_bug.cgi?id=732237
Dynamic pipelines that get and release the sink pads will finalize
the pad without going through gst_gl_mixer_stop() which is where the
upload object is usually freed. Don't leak objects in such a case.
Instead, always use the low-bandwidth playlist, making things go smoother,
as the current heuristic is rather tuned for normal playback and
currently does not behave properly.
https://bugzilla.gnome.org/show_bug.cgi?id=734445
When a seek with a negative rate is requested, find the target
segment that gstsegment.stop belongs to, and then download from
this segment backwards until the first segment.
This allows proper reverse playback.
If the window is resized, GstStructure pointer values have to be rescaled to
original geometry. A get_surface_dimensions GLWindow class method is added for
this purpose and used in the navigation send_event function.
https://bugzilla.gnome.org/show_bug.cgi?id=703486
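The rescaling amounts to roughly the following (illustrative only; the
surrounding variables are assumed to come from get_surface_dimensions()
and the negotiated caps):

  gdouble x, y;
  guint surface_w, surface_h;     /* from get_surface_dimensions() */
  guint video_w, video_h;         /* negotiated video geometry */

  if (gst_structure_get_double (structure, "pointer_x", &x) &&
      gst_structure_get_double (structure, "pointer_y", &y)) {
    gst_structure_set (structure,
        "pointer_x", G_TYPE_DOUBLE, x * video_w / surface_w,
        "pointer_y", G_TYPE_DOUBLE, y * video_h / surface_h, NULL);
  }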
When flushing, this will prevent dashdemux from trying to download more
fragments or more chunks of the same fragment before stopping.
Also improves the error handling to not transform everything non-ok into
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=734014
templatematch operates on BGR data. In fact, OpenCV's IplImage always
stores color image data in BGR order -- this isn't documented at all in
the OpenCV source code, but there are hints around the web (see for
example
http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/opencv-intro.html#SECTION00041000000000000000
and http://www.comp.leeds.ac.uk/vision/opencv/iplimage.html ).
gst_templatematch_load_template loads the template (the image to find)
from disk using OpenCV's cvLoadImage, so it is stored in an IplImage in
BGR order. But in gst_templatematch_chain, no OpenCV conversion
functions are used: the imageData pointer of the IplImage for the video
frame (the image to search in) is just set to point to the raw buffer
data. Without this fix, that raw data is in RGB order, so the call to
cvMatchTemplate ends up comparing the template's Blue channel against
the frame's Red channel, producing very poor results.
Previously changing the template property resulted in an exception
thrown from cvMatchTemplate, because "dist_image" (the intermediate
match-certainty-distribution) was the wrong size (because the
template image size had changed).
Locking has also been added to allow changing the properties (e.g. the
pattern to match) while the pipeline is playing.
* gst_element_post_message is moved outside of the lock, because it will
call into arbitrary user code (otherwise, if that user code calls into
gst_templatematch_set_property on this same thread it would deadlock).
* gst_template_match_load_template: If we fail to load the new template
we still unload the previous template, so this element becomes a no-op
in the pipeline. The alternative would be to keep the previous template;
I believe unloading the previous template is a better choice, because it
is consistent with the state this element would be in if it fails to
load the very first template at start-up.
Thanks to Will Manley for the bulk of this work; any errors are probably
mine.
The early return was bypassing the call to gst_pad_push. With no
filter->template (and thus no filter->cvTemplateImage) the rest of this
function is essentially a no-op (except for the call to gst_pad_push).
This (plus the previous commit) allows templatematch to be
enabled/disabled without removing it entirely from the pipeline, by
setting/unsetting the template property.
Delaying the segment event until caps are decided can cause issues, as
the first thing katedec does in its chain function is a segment clip.
This will lead to an assertion if the segment format is undefined.
https://bugzilla.gnome.org/show_bug.cgi?id=733226
Properly handle the caps event by configuring the kate decoding lib using the
available streamheaders. This makes it possible to decode kate subtitles when
the stream is seeked before katedec gets the initial buffers that are usually
the streamheaders.
https://bugzilla.gnome.org/show_bug.cgi?id=733226
The headers were never getting reffed when being added to the headers
list, which is later unreffed-and-freed by the caller (e.g.
gst_opus_parse_parse_frame()).
https://bugzilla.gnome.org/show_bug.cgi?id=733013
The expected default behaviour for a video sink is to maintain the
aspect ratio. Fix the default value to reflect this. The property
default was already TRUE, but the value was not initially TRUE.
First, this is handled by basetransform, hence this is a no-op; and if it
weren't, it would lead to a buffer copy being leaked, and then an unreffed
buffer being pushed downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=732756
OpenNI2 makes no guarantees of timestamp starting from zero, just that
it will be a millisecond timestamp. Make timestamps start from zero
manually so things work correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=732535
Allows automatic negotiation of the size in the following case:
gst-launch-1.0 glvideomixer name=m sink_0::xpos=0 sink_1::xpos=320 ! glimagesink \
videotestsrc ! m. \
videotestsrc pattern=1 ! m.
https://bugzilla.gnome.org/show_bug.cgi?id=731878
This is to allow gst-launch debugging with multiple GL contexts as
well as to avoid segfaulting innocent gtk+ apps that have not called
XInitThreads.
https://bugzilla.gnome.org/show_bug.cgi?id=731525
Only reset the decryption engine on the first buffer of a fragment,
not again for the second buffer. This fixes corrupting the second
buffer of a fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=731968
gstwaylandsink.c:480:14: error: comparison of constant -1 with expression of
type 'enum wl_shm_format' is always false
[-Werror,-Wtautological-constant-out-of-range-compare]
if (format == -1)
~~~~~~ ^ ~~
This allows waylandsink to fail gracefully before going to READY
in case one of the required interfaces does not exist. Not all
interfaces are necessary for all modes of operation, but it is
better imho to fail before going to READY if at least one feature
is not supported, than to fail and/or crash at some later point.
In the future we may want to relax this restriction and allow certain
interfaces not to be present under certain circumstances, for example
if there is an alternative similar interface available (for instance,
xdg_shell instead of wl_shell), but for now let's require them all.
Weston supports them all, which is enough for us now. Other compositors
should really implement them if they don't already. I don't like the
idea of supporting many different compositors with different sets of
interfaces implemented. wl_subcompositor, wl_shm and wl_scaler are
really essential for having a nice video sink. Enough said.
This essentially hides the video and allows the application to
potentially draw a black background or whatever else it wants.
This allows to differentiate the "paused" and "stopped" modes
from the user's point of view.
Also reworded a comment there to make my thinking more clear,
since the "reason for keeping the display around" is not really
the exposed() calls, as there is no buffer shown in READY/NULL
anymore.
1) We know that gst_wayland_sink_render() will commit the surface
in the same thread a little later, as gst_wl_window_set_video_info()
is always called from there, so we can save the compositor from
some extra calculations.
2) We should not commit a resize with the new video info while we are still
showing the buffer of the previous video, with the old caps, as that
would probably be a visible resize glitch.
Previously, in order to change the surface size we had to let the pipeline
redraw it, which at first also involved re-negotiating caps, etc, so a
synchronization with the pipeline was absolutely necessary.
At the moment, we are using wl_viewport, which separates the surface size
from the buffer size and it also allows us to commit a surface resize without
attaching a new buffer, so it is enough to just do:
gst_wayland_video_pause_rendering():
wl_subsurface_set_sync()
gst_video_overlay_set_render_rectangle():
wl_subsurface_set_position()
wl_viewport_set_destination()
wl_surface_damage()
wl_surface_commit()
... commit the parent surface ...
gst_wayland_video_resume_rendering():
wl_subsurface_set_desync()
This is enough to synchronize a surface resize and the pipeline can continue
drawing independently. Now of course, the names pause/resume_rendering are
bad. I will rename them in another commit.
Access is protected only for setting/creating/destroying the display
handle. set_caps() for example is not protected because it cannot be
called before changing state to READY, at which point there will be
a display handle available, which cannot be changed by any thread at
that point.
This is because:
* GST_ELEMENT_WARNING/ERROR do lock the OBJECT_LOCK and we deadlock instantly
* In future commits I want to make use of GstBaseSink functions that also
lock the OBJECT_LOCK inside this code
* own_surface is not needed anymore
* gst_wl_window_from_surface is not used externally anymore
* many initializations to 0 are not needed (GObject does them)
This means that the given surface in set_window_handle can now be
the window's top-level surface on top of which waylandsink creates
its own subsurface for rendering the video.
This has many advantages:
* We can maintain aspect ratio by overlaying the subsurface in
the center of the given area and fill the parent surface's area
black in case we need to draw borders (instead of adding another
subsurface inside the subsurface given from the application,
so, less subsurfaces)
* We can more easily support toolkits without subsurfaces (see gtk)
* We can properly use gst_video_overlay_set_render_rectangle
as our api to set the video area size from the application and
therefore remove gst_wayland_video_set_surface_size.
This drops the ugly GstWaylandWindowHandle structure and is much
more elegant because we can now request the display separately
from the window handle. Therefore the window handle can be requested
in render(), i.e. when it is really needed and we can still open
the correct display for getting caps and creating the pool earlier.
This change also separates setting the wl_surface from setting its size.
Applications should do that by calling two functions in sequence:
gst_video_overlay_set_window_handle (overlay, surface);
gst_wayland_video_set_surface_size (overlay, w, h);
This is the only way to get the negotiation working with the dynamic
detection of formats from the display, because the pipeline needs
to know the supported formats in the READY state and the supported
formats can only be known if we open the display.
Unfortunately, in wayland we cannot have a separate connection to
the display from the rest of the application, so we need to ask for a
window handle when going to READY in order to get the display from it.
And since it's too early to create a top level window from the state
change to READY, create it in render() when there is no other window.
This also changes set_window_handle() to not support window handle
changes in PAUSED/PLAYING (because it's complex to handle and useless
in practice) and make sure that there is always a valid display pointer
around in the READY state.
This fixes weird freezes because of frame_redraw_callback() not being
called from the main thread when it should with weston's toy toolkit.
It's also safer to know that frame_redraw_callback() will always be
called from our display thread... Otherwise it could be called after
the sink has been destroyed for example.