i686-apple-darwin11-llvm-gcc-4.2
gstglmixer.h:43: error: redefinition of typedef ‘GstGLMixer’
gstglmixerpad.h:32: error: previous declaration of ‘GstGLMixer’ was here
gstglmixer.h:46: error: redefinition of typedef ‘GstGLMixerFrameData’
gstglmixerpad.h:33: error: previous declaration of ‘GstGLMixerFrameData’ was here
The graphene-1.0 part should not be in the source code. This directory
is part of the cflags include, similar to the gstreamer-1.0/
directory. This breaks compilation if the include directory where
graphene is installed is not in your include path.
Bitrate-limit is already available in the baseclass and, even though
the bandwidth-usage name is better, hls and mss already used
bitrate-limit. This patch deprecates the bandwidth-usage property and maps
it to the baseclass bitrate-limit.
Move the property from the subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allow the playlist-length to accept '0' as a value, indicating
that no segment should be removed from the playlist. This allows
generating playlists to be used as VOD when complete.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed from
mssdemux and hlsdemux as it is now in the base class.
By implementing get_live_seek_range.
As shown by:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php
This patch handles live seeking, by setting a live seek range
comprised between now - timeShiftBufferDepth and now.
The interesting thing with this stream is that one can actually
ask for fragments up to availabilityStartTime, but it seems quite clear
in the spec that content is only guaranteed to exist up to
timeShiftBufferDepth.
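As a rough sketch of the idea (the helper below is illustrative, not the
actual patch):

  #include <gst/gst.h>

  /* The live seek range spans from (now - timeShiftBufferDepth) up to now,
   * clamped so the start never goes negative. */
  static void
  compute_live_seek_range (GstClockTime now, GstClockTime depth,
      GstClockTime * start, GstClockTime * stop)
  {
    *stop = now;
    if (!GST_CLOCK_TIME_IS_VALID (depth) || depth > now)
      *start = 0;
    else
      *start = now - depth;
  }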
One can test live seeking this way:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php \
--set-scenario seek_back.scenario
with scenario being:
description, seek=true
seek, playback-time=position+5.0, start="position-600.0",
flags=accurate+flush
This example will play the stream, wait for five seconds, then seek back
to a position 10 minutes earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=744362
Add parsed/framed=true to allow negotiation with some
muxers that require parsed input. Encoders already provide
parsed/framed output, so the caps should say so.
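For illustration only (the media type and the extra fields are an example,
not taken from this patch), the idea is simply to advertise the field in
the source caps:

  caps = gst_caps_new_simple ("video/x-h264",
      "stream-format", G_TYPE_STRING, "byte-stream",
      "alignment", G_TYPE_STRING, "au",
      "parsed", G_TYPE_BOOLEAN, TRUE, NULL);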
Some variables are not initialized in the constructor. It is highly unlikely
they are used before being set, but it is safer to initialize them.
CID #1197704
Allows finer-grained decisions about formats and features at each
stage of the pipeline.
Also provide propose_allocation for glupload based on the supported
methods.
Make GstGLMemory hold the texture target (tex_target) that the texture it
represents (tex_id) is bound to. Modify gst_gl_memory_wrapped_texture and
gst_gl_download_perform_with_data to take the texture target as an argument.
This change is needed to support wrapping textures created outside libgstgl,
which might be bound to a target other than GL_TEXTURE_2D. For example on OSX
textures coming from VideoToolbox have target GL_TEXTURE_RECTANGLE.
With this change we still keep (and sometimes imply) GL_TEXTURE_2D as the
target of textures created with libgstgl.
API: modify GstGLMemory
API: modify gst_gl_memory_wrapped_texture
API: gst_gl_download_perform_with_data
Depending on the platform, it was only ever implemented to 1) set a
default surface size, 2) resize based on the video frame, or 3) do nothing.
Instead, provide a set_preferred_size () that elements/applications
can use to request a certain size which may be ignored for
videooverlay/other cases.
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
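A simplified sketch of that default behaviour (the real code goes through
the exposed push_buffer helper and the base class stream structure; names
and types here are reduced for illustration):

  static GstFlowReturn
  push_all_available (GstAdapter * adapter, GstPad * srcpad)
  {
    gsize avail = gst_adapter_available (adapter);
    GstBuffer *buffer;

    if (avail == 0)
      return GST_FLOW_OK;

    /* take everything buffered so far and push it downstream */
    buffer = gst_adapter_take_buffer (adapter, avail);
    return gst_pad_push (srcpad, buffer);
  }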
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request.
gstdashdemux.c:1330:13: error: implicit conversion from enumeration type 'enum _GstAdaptiveDemuxFlowReturn' to different enumeration type
'GstFlowReturn' [-Werror,-Wenum-conversion]
ret = GST_ADAPTIVE_DEMUX_FLOW_SUBSEGMENT_END;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gmyth seems to be unmaintained upstream, and no one has asked
for this to be ported for a very long time, so let's just
remove it. Neither Debian nor Fedora seems to ship libgmyth
any longer, and in any case it's most likely deprecated by
the UPnP support in MythTV.
The segment start time is calculated as the offset into the current segment.
The old condition to detect the end of period (i.e. segment start time >
period start + period duration) failed when the period start was not 0 since
the segment start time does not take the period start time into account.
Fix this detection by only comparing the segment start to the period duration.
https://bugzilla.gnome.org/show_bug.cgi?id=733369
The ISOBMFF profile allows defining subsegments in a segment. At those
subsegment boundaries the client can switch from one representation to
another as they have aligned indexes.
To handle those the 'sidx' index is parsed from the stream and the
entries point to pts/offset of the samples in the stream. Knowing that
the entries are aligned in the different representation allows the client
to switch mid fragment. In this profile a single fragment is used per
representation and the subsegments are contained in this fragment.
To notify the superclass about the subsegment boundary, the chunk_received
function returns a special flow return indicating it. In this case,
the superclass will check if a more suitable bitrate is available and will
change to the same subsegment in this new representation.
It also requires special handling of the position in the stream as the
fragment advancing is now done by incrementing the index of the subsegment.
It will only advance to the next fragment once all subsegments have been
downloaded.
https://bugzilla.gnome.org/show_bug.cgi?id=741248
The old code was using gst_caps_normalize() and was generally overly
complex. Simplify by picking sample rate and number of channels from
upstream and the sample format from the allowed caps. If the format caps
is a list of strings, just pick the first one. And if the srcpad isn't
linked yet, use the default format (S16).
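A rough sketch of that selection, assuming the negotiated caps are in a
local 'allowed_caps' variable (illustrative name):

  const gchar *format = "S16";      /* default when the srcpad isn't linked */
  GstStructure *s = gst_caps_get_structure (allowed_caps, 0);
  const GValue *v = gst_structure_get_value (s, "format");

  if (v != NULL && GST_VALUE_HOLDS_LIST (v))
    v = gst_value_list_get_value (v, 0);   /* list of strings: take the first */
  if (v != NULL && G_VALUE_HOLDS_STRING (v))
    format = g_value_get_string (v);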
https://bugzilla.gnome.org/show_bug.cgi?id=740195
Optimize the loop by moving the condition outside of it and reuse the
find_next_fragment function to check if there is a next fragment instead of
replicating the same loop.
Duration queries can be done a few times per second and would cause
the segment list to be traversed for every one. Caching the duration
prevents that.
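A minimal sketch of the idea (the field and helper names are hypothetical):

  /* Recompute only when the cached value was invalidated (e.g. on a
   * playlist update), otherwise answer the query from the cache. */
  if (!GST_CLOCK_TIME_IS_VALID (demux->cached_duration))
    demux->cached_duration = compute_duration_from_segment_list (demux);
  gst_query_set_duration (query, GST_FORMAT_TIME, demux->cached_duration);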
The variable hands was already checked to contain a value at the beginning
of the current block (in line 504). There is no need to check again; this is
logically dead code.
CID 1197693
The duration values in playlists are approximate only, and for
playlist versions 2 and older they are only rounded integer values.
They cannot be used to timestamp buffers. This resulted in playback
gaps and skips because the actual duration of fragments is slightly
different. The solution is to only set the pts of the very first
buffer processed, not for each fragment.
q->bitrate is a guint64, but G_TYPE_INT may read fewer bits
off the stack, and if we pass more, the NULL sentinel
may not be found at the right place, which in turn might
lead to crashes.
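Illustration only (this is not the call site touched by the patch, just the
general varargs pitfall, shown with gst_structure_new):

  GstStructure *s;

  /* A 64-bit value passed through varargs must be matched by a 64-bit GType;
   * otherwise the following arguments and the NULL sentinel are read from
   * the wrong stack offsets. */
  s = gst_structure_new ("stats",
      "bitrate", G_TYPE_UINT64, (guint64) q->bitrate, NULL);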
https://bugzilla.gnome.org/show_bug.cgi?id=741751
hlsdemux assumes that seeking is not allowed for live streams;
however, seeking is possible if there are sufficient fragments in the
manifest. For example, the BBC has live streams that contain 2 hours
of fragments.
The seek code for both live and on-demand is common code. The
difference between them is that an offset has to be calculated
for the timecode of the first fragment in the live playlist.
When hlsdemux starts to play a live stream, the possible seek range
is between 0 and A seconds. After some time has passed, the beginning of
the stream will no longer be available in the playlist and the seek
range is between B and C seconds.
Seek range:
start 0 ........... A
later B ........... C
This commit adds code to keep a note of the B and C values
and the highest sequence number it has seen. Every time it updates the
media playlist, it walks the list of fragments, seeing if there is a
fragment with sequence number > highest_seen_sequence. If so, the values
of B and C are updated. The value of B is used when timestamping
buffers.
It also makes sure the seek range is never closer than three fragments
from the end of the playlist - see 6.3.3. "Playing the Playlist file"
of the HLS draft.
https://bugzilla.gnome.org/show_bug.cgi?id=725435
For small amounts of data, the content might be mistyped, and that would cause
the pipeline to fail. For example, if you have AAC inside mpegts,
with small amounts of data the AAC samples would cause the typefinder to
think it is AAC and not mpegts.
https://bugzilla.gnome.org/show_bug.cgi?id=736061
If typefind fails, check to see if the buffer is too short for typefind. If this is the case,
prepend the decrypted buffer to the pending buffer and try again the next time around.
https://bugzilla.gnome.org/show_bug.cgi?id=740458
Corrected the final boundary mechanism so that a final boundary is
added to each mail with multipart content that is sent,
not just to the last one.
https://bugzilla.gnome.org/show_bug.cgi?id=741553
This reverts commit 15394aa705.
The latest release (v1.1) does not have pkg-config support
yet, so this plugin won't be built with the latest release.
Cerbero uses the latest release and expects the plugin to be
built, so this makes cerbero builds fail.
We can re-commit this once there's a release that includes
pkg-config support.
Rework reverse fragment traversal with repetition fields to prevent a
NULL pointer dereference and to avoid never advancing a fragment, as the variable
is unsigned and would always be non-negative.
CID #1257627
CID #1257628
Read the "r" attribute from fragments to support fragments nodes
that use repetition to have a shorter Manifest xml.
Instead of doing:
<c d="100" />
<c d="100" />
You can use:
<c d="100" r="2" />
According to the HLS spec, the remainder of the line following
the comma on the EXTINF tag is not required. This patch removes
the fake title and saves some bytes on the playlist.
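For illustration (the duration and title values here are made up):
Instead of writing:
#EXTINF:10,some-fake-title
the playlist now only contains:
#EXTINF:10,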
https://bugzilla.gnome.org/show_bug.cgi?id=741096
A context can create a GLsync object that can be waited on in order
to ensure that GL resources created in one context are able to be
used in another shared context without any chance of reading invalid
data.
This meta would be placed on buffers that are known to cross from
one context to another. The receiving element would then wait
on the sync object to ensure that the data to be used is complete.
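At the raw GL level, the mechanism such a meta wraps looks roughly like this
(plain GL calls, not the meta's own API):

  /* in the producing context, after the GL work on the resource: */
  GLsync sync = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
  glFlush ();

  /* in the consuming (shared) context, before using the resource: */
  glWaitSync (sync, 0, GL_TIMEOUT_IGNORED);
  glDeleteSync (sync);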
This gives more flexibility to the subclasses and permits removing the
ugly GstVideoAggregatorClass->disable_frame_conversion API.
WARNING: This breaks the API as it removes the disable_frame_conversion
field
API:
+ GstVideoAggregatorClass->find_best_format
+ GstVideoAggregatorPadClass->set_format
+ GstVideoAggregatorPadClass->prepare_frame
+ GstVideoAggregatorPadClass->clean_frame
- GstVideoAggregatorClass->disable_frame_conversion
https://bugzilla.gnome.org/show_bug.cgi?id=740768
If we seek when the media is in the stopped state, playback-test gives a
critical error, since the context of glimagesink is destroyed during stop.
But since the context is not present, we need not handle send_event in
glimagesink. Hence add a condition to check whether the context is valid.
https://bugzilla.gnome.org/show_bug.cgi?id=740305
Otherwise e.g. videotestsrc ! openh264enc ! ... will drop every second frame
because the target bitrate can't be reached without losing too
much quality.
gst_glimage_sink_handle_events can be called from the overlay interface and from
the main thread before GL is setup. Before this change, that would call
_ensure_gl_setup() and deadlock on OSX.
Change things so that it's always safe to call gst_glimage_sink_handle_events()
without stuff deadlocking.
Remove the gst_glimage_sink_handle_events call in gst_glimage_sink_init. It was
unnecessary and, when the element was instantiated from the main thread, caused a
deadlock on OSX when creating the context (thread).
Both Firefox and Chrome use OPUS as the encoding name in their SDP.
Adding this now de facto standard name removes the need for a special
case in the SDP parsing code.
https://bugzilla.gnome.org/show_bug.cgi?id=737810
With force-aspect-ratio=true, if the width or height changed, the
viewport wasn't being updated to respect the new video width and height
until a resize occurred.
Otherwise, it is only possible for the sink pads and the src pads to
have the exact same caps features. We can convert from any feature
to another, so support that.
Do not try to render a buffer that is already being rendered.
This happens typically during the initial rendering stage as the first
buffer is rendered twice: first by preroll(), then by render().
This commit avoids this assertion failure:
CRITICAL: gst_wayland_compositor_acquire_buffer: assertion
'meta->used_by_compositor == FALSE' failed
https://bugzilla.gnome.org/show_bug.cgi?id=738069
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
If waylandsink is the owner of the display then it is in charge
of catching input events on the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=733682
Signed-off-by: Tifaine Inguere <tifaine.inguere@st.com>
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
There are two cases covered here:
1) The GstWlDisplay forces the release of the last buffer and the pool
gets destroyed in this context, which means it unregisters all the
other buffers from the GstWlDisplay as well, and the display->buffers
hash table gets corrupted because it is being iterated.
2) The pool and its buffers get destroyed concurrently from another
thread while GstWlDisplay is finalizing and many things get corrupted.
The main reason behind this is that when the video caps change and the video
subsurface needs to resize and change position, the wl_subsurface.set_position
call needs a commit in its parent in order to take effect. Previously,
the parent was the application's surface, over which there is no control.
Now, the parent is inside the sink, so we can commit it and change size smoothly.
As a side effect, this also allows the sink to draw its black borders on
its own, without the need for the application to do that. And another side
effect is that this can now allow resizing the sink when it is in top-level
mode and have it respect the aspect ratio.
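At the protocol level, the constraint looks like this (illustrative
wayland-client calls; the variable names are made up):

  /* Repositioning a subsurface is double-buffered state: it only takes
   * effect once the parent surface is committed. */
  wl_subsurface_set_position (video_subsurface, new_x, new_y);
  wl_surface_commit (parent_surface);   /* applies the pending position */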
Because we no longer have a custom buffer pool that holds a reference
to the display, there is no way for a cyclic reference to happen like
before, so we no longer need to explicitly call a function from the
display to release the wl_buffers.
However, the general mechanism of registering buffers to the display
and forcibly releasing them when the display is destroyed is still
needed to avoid potential memory leaks. The comment in wlbuffer.c
is updated to reflect the current situation.
This reduces the complexity of having a custom buffer pool, as
we don't really need it. We only need the custom allocation part.
And since the wl_buffer is no longer saved in a GstMeta, we can
create it and add it on the buffers in the sink's render()
function, which removes the reference cycle caused by the pool
holding a reference to the display and also allows more generic
scenarios (the allocator being used in another pool, or buffers
being allocated without a pool [if anything stupid does that]).
This commit also simplifies the propose_allocation() function,
which doesn't really need to do all these complicated checks,
since there is always a correct buffer pool available, created
in set_caps().
The other side effect of this commit is that a new wl_shm_pool
is now created for every GstMemory, which means that we use
as much shm memory as we actually need and no more. Previously,
the created wl_shm_pool would allocate space for 15 buffers, no
matter if they were being used or not.
This also removes the GstWlMeta and adds a wrapper class for wl_buffer
which is saved in the GstBuffer qdata instead of being a GstMeta.
The motivation behind this is mainly to allow attaching wl_buffers to
GstBuffers that have not been allocated inside the GstWaylandBufferPool.
If, for example, an upstream element sends us a buffer from a different
pool, which does not need to be copied to a buffer from our pool because
it may be a hardware buffer (hello dmabuf!), we can create a wl_buffer
directly from it and attach it to the GstBuffer. This means, first, that
we don't have to re-create a wl_buffer every time the same GstBuffer
arrives and, second, that the whole mechanism for keeping the buffer out
of the pool until there is a wl_buffer::release also applies to that
foreign GstBuffer.
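A minimal sketch of the qdata attachment (the quark string and the wrapper
object here are illustrative, not necessarily what the code uses):

  static GQuark
  gst_wl_buffer_qdata_quark (void)
  {
    static GQuark quark = 0;
    if (quark == 0)
      quark = g_quark_from_static_string ("GstWlBuffer");
    return quark;
  }

  /* attach the wrapper to the GstBuffer; it is unreffed with the buffer */
  gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (gstbuffer),
      gst_wl_buffer_qdata_quark (), wlbuffer, (GDestroyNotify) g_object_unref);

  /* later, when the same GstBuffer comes around again */
  wlbuffer = gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (gstbuffer),
      gst_wl_buffer_qdata_quark ());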
The header would be read each and every time the parse function was called,
which is not necessary since, until we have complete data,
we need not parse the header again.
https://bugzilla.gnome.org/show_bug.cgi?id=737984