Only reset the decryption engine on the first buffer of a fragment,
not again for the second buffer. This fixes corruption of the second
buffer of a fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=731968
gstwaylandsink.c:480:14: error: comparison of constant -1 with expression of
type 'enum wl_shm_format' is always false
[-Werror,-Wtautological-constant-out-of-range-compare]
if (format == -1)
~~~~~~ ^ ~~
This allows waylandsink to fail gracefully before going to READY
in case one of the required interfaces does not exist. Not all
interfaces are necessary for all modes of operation, but it is
better imho to fail before going to READY if at least one feature
is not supported, than to fail and/or crash at some later point.
In the future we may want to relax this restriction and allow certain
interfaces not to be present under certain circumstances, for example
if there is an alternative similar interface available (for instance,
xdg_shell instead of wl_shell), but for now let's require them all.
Weston supports them all, which is enough for us now. Other compositors
should really implement them if they don't already. I don't like the
idea of supporting many different compositors with different sets of
interfaces implemented. wl_subcompositor, wl_shm and wl_scaler are
really essential for having a nice video sink. Enough said.
This essentially hides the video and allows the application to
potentially draw a black background or whatever else it wants.
This allows differentiating the "paused" and "stopped" modes
from the user's point of view.
Also reworded a comment there to make my thinking more clear,
since the "reason for keeping the display around" is not really
the exposed() calls, as there is no buffer shown in READY/NULL
anymore.
1) We know that gst_wayland_sink_render() will commit the surface
in the same thread a little later, as gst_wl_window_set_video_info()
is always called from there, so we can save the compositor from
some extra calculations.
2) We should not commit a resize with the new video info while we are still
   showing the buffer of the previous video, with the old caps, as that
   would probably cause a visible resize glitch.
Previously, in order to change the surface size we had to let the pipeline
redraw it, which at first also involved re-negotiating caps, etc, so a
synchronization with the pipeline was absolutely necessary.
At the moment, we are using wl_viewport, which separates the surface size
from the buffer size and it also allows us to commit a surface resize without
attaching a new buffer, so it is enough to just do:
gst_wayland_video_pause_rendering():
    wl_subsurface_set_sync()
gst_video_overlay_set_render_rectangle():
    wl_subsurface_set_position()
    wl_viewport_set_destination()
    wl_surface_damage()
    wl_surface_commit()
... commit the parent surface ...
gst_wayland_video_resume_rendering():
    wl_subsurface_set_desync()
This is enough to synchronize a surface resize and the pipeline can continue
drawing independently. Now of course, the names pause/resume_rendering are
bad. I will rename them in another commit.
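A minimal sketch of that sequence in client code (the helper name and its
arguments are illustrative, not the actual waylandsink symbols):

  /* hedged sketch: resize the video subsurface in sync with the parent */
  static void
  resize_video_area (struct wl_subsurface *subsurface,
      struct wl_viewport *viewport, struct wl_surface *surface,
      int x, int y, int w, int h)
  {
    wl_subsurface_set_sync (subsurface);        /* pause_rendering */

    wl_subsurface_set_position (subsurface, x, y);
    wl_viewport_set_destination (viewport, w, h);
    wl_surface_damage (surface, 0, 0, w, h);
    wl_surface_commit (surface);                /* set_render_rectangle */

    /* ... the application commits the parent surface here ... */

    wl_subsurface_set_desync (subsurface);      /* resume_rendering */
  }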
Access is protected only for setting/creating/destroying the display
handle. set_caps() for example is not protected because it cannot be
called before changing state to READY, at which point there will be
a display handle available, which cannot be changed by any thread at
that point.
This is because:
* GST_ELEMENT_WARNING/ERROR do lock the OBJECT_LOCK and we deadlock instantly
* In future commits I want to make use of GstBaseSink functions that also
lock the OBJECT_LOCK inside this code
* own_surface is not needed anymore
* gst_wl_window_from_surface is not used externally anymore
* many initializations to 0 are not needed (GObject does them)
This means that the given surface in set_window_handle can now be
the window's top-level surface on top of which waylandsink creates
its own subsurface for rendering the video.
This has many advantages:
* We can maintain aspect ratio by overlaying the subsurface in
the center of the given area and filling the parent surface's area
with black in case we need to draw borders (instead of adding another
subsurface inside the subsurface given by the application,
so, fewer subsurfaces)
* We can more easily support toolkits without subsurfaces (see gtk)
* We can properly use gst_video_overlay_set_render_rectangle
as our api to set the video area size from the application and
therefore remove gst_wayland_video_set_surface_size.
This drops the ugly GstWaylandWindowHandle structure and is much
more elegant because we can now request the display separately
from the window handle. Therefore the window handle can be requested
in render(), i.e. when it is really needed and we can still open
the correct display for getting caps and creating the pool earlier.
This change also separates setting the wl_surface from setting its size.
Applications should do that by calling two functions in sequence:
gst_video_overlay_set_window_handle (overlay, surface);
gst_wayland_video_set_surface_size (overlay, w, h);
This is the only way to get the negotiation working with the dynamic
detection of formats from the display, because the pipeline needs
to know the supported formats in the READY state and the supported
formats can only be known if we open the display.
Unfortunately, in wayland we cannot have a separate connection to
the display from the rest of the application, so we need to ask for a
window handle when going to READY in order to get the display from it.
And since it's too early to create a top level window from the state
change to READY, create it in render() when there is no other window.
This also changes set_window_handle() to not support window handle
changes in PAUSED/PLAYING (because it's complex to handle and useless
in practice) and makes sure that there is always a valid display pointer
around in the READY state.
This fixes weird freezes caused by frame_redraw_callback() not being
called from the main thread when it should be, with weston's toy toolkit.
It's also safer to know that frame_redraw_callback() will always be
called from our display thread... Otherwise it could be called after
the sink has been destroyed for example.
We are not supposed to redraw until we receive a frame callback and this
is especially useful to avoid allocating too many buffers while the
window is not visible, because the compositor may not call wl_buffer.release
until the window becomes visible (ok, this is a wayland bug, but...).
This is achieved by adding an extra reference on the buffers, which does
not allow them to return to the pool. When they are released, this reference
is dropped.
The rest of the complexity of this patch (hash table, mutex, flag, explicit release calls)
merely exists to allow a safe, guaranteed and deadlock-free destruction sequence.
See the added comment on gstwaylandsink.c for details.
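A minimal sketch of the extra-reference idea, with a hypothetical
buffer_release() listener (the real sink additionally uses the hash table
and mutex mentioned above):

  static void
  buffer_release (void *data, struct wl_buffer *wl_buffer)
  {
    GstBuffer *buffer = data;
    /* the compositor is done with it; drop the extra ref so the
     * buffer can return to the pool */
    gst_buffer_unref (buffer);
  }

  static const struct wl_buffer_listener buffer_listener = {
    buffer_release
  };

  /* in render(): keep the buffer out of the pool until release */
  gst_buffer_ref (buffer);
  wl_buffer_add_listener (wlbuffer, &buffer_listener, buffer);
  wl_surface_attach (surface, wlbuffer, 0, 0);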
start() makes sure that the minimum amount of buffers requested is allocated.
stop() makes sure that buffers are actually destroyed and prevents
filling the file system when resizing the surface a lot, because the
wayland-shm-* files will stay on the file system as long as the wl_buffers
created out of them are alive.
This is the initial implementation, without the GstVideoOverlay.expose()
method. It only implements using an external (sub)surface and resizing
it with GstWaylandVideo.
The reference to the sink is not really needed anyway in waylandpool,
what matters basically is that the display is active as long as the
pool is active, so we really want to reference the display object
instead of the sink.
* make use of GstBufferPool::start/stop functions to allocate/deallocate memory
* get rid of struct shm_pool and do all operations cleanly inside WaylandBufferPool
* store a GstVideoInfo during configuration instead of the width & height
and use the stride from the video info instead of hardcoding its value
The reshape property was never used.
Replace the draw property with a signal.
Based on patch by Mathieu Duponchelle <mathieu.duponchelle@epitech.eu>
https://bugzilla.gnome.org/show_bug.cgi?id=704507
This can happen if the playlists have moved due to the variant playlist
now being redirected to another target. This currently only works as long
as the referenced playlists don't change in relation to the variant
playlist, and the new location is purely due to a new path triggered by a
new redirection target of the variant playlist, or a new redirection
target of the playlist itself.
https://bugzilla.gnome.org/show_bug.cgi?id=731164
We add a new signal, get-rollover-counter, to the SRTP encoder. Given an
SSRC, the signal will return the current internal SRTP rollover counter
for the given stream.
For the SRTP decoder we have a new SRTP caps parameter "roc" that needs
to be set when a new SRTP stream is created for a given SSRC.
https://bugzilla.gnome.org/show_bug.cgi?id=726861
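A hedged usage sketch (the exact signal and caps field types are assumed
here):

  /* sender side: query the current rollover counter for an SSRC */
  guint roc = 0;
  g_signal_emit_by_name (srtpenc, "get-rollover-counter", ssrc, &roc);

  /* receiver side: hand the counter to srtpdec via the "roc" caps field,
   * e.g. in the caps returned from srtpdec's request-key callback */
  caps = gst_caps_new_simple ("application/x-srtp",
      "roc", G_TYPE_UINT, roc, NULL);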
Expose one more libcurl option: CURLOPT_SSH_HOST_PUBLIC_KEY_MD5.
This allows authenticating the server by the MD5 fingerprint of
the server's public key.
https://bugzilla.gnome.org/show_bug.cgi?id=723167
The parsing function already frees the old value (if any). Avoid a double
free by not freeing the value before calling the function, since the
pointer was not being set to NULL.
Coverity ID: 1212178
The _parse_url function already frees the previous pointer; freeing it
beforehand without setting it to NULL caused a double free.
Coverity ID: 1212181
Coverity ID: 1212180
Coverity ID: 1212179
Refactor mssdemux to replace uridownloader with an internal
source element, which reduces startup latency and provides smaller
buffers for better buffering management downstream
data does not have to be freed at all here, it's a pointer to
an arbitrary position inside the current line. Also don't reuse
the data variable for anything else, that will cause crashes
in playlists that have the I-frame playlist URI followed by
other attributes.
CID 1212127
Set up a message handling function to be able to catch errors
from the source element and signal the cond to allow the download
loop to retry the download.
Instead, use a source element linked to a ghostpad to provide
smaller buffers and more granular control for downstream
buffering elements while also reducing startup latency
Only the first buffer of a fragment has its timestamp set, so only
update the segment.position when pushing those buffers to avoid
having GST_CLOCK_TIME_NONE set as the position
https://bugzilla.gnome.org/show_bug.cgi?id=729364
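A minimal sketch of the idea (field names are illustrative):

  if (GST_BUFFER_PTS_IS_VALID (buffer))
    demux->segment.position = GST_BUFFER_PTS (buffer);
  ret = gst_pad_push (stream->pad, buffer);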
Otherwise we will never recover from previous errors, and especially
will never start again after a flushing seek if downstream returned
GST_FLOW_FLUSHING to us.
hlsdemux can't rely on the source to push flushes on a seek in READY
as that might not make sense. So always resort to flushing the
internal proxy pads by pushing flush events from the source's src pad.
Also as the seeking is not required anymore, only seek if there is
really a byte range to be used. And store a ref to the source's
src pad to avoid doing get_static_pad for every fragment.
In the decryption scenario, a buffer is always stored to be sent later,
waiting for more data or EOS to be able to strip the final bytes
if requested. In case an error happened, this buffer can be ignored
and not pushed downstream.
Handle some more error cases:
1) When the source element fails to go to ready
2) When decryption fails
3) When there is no source to handle a specific URI
4) When the URI is invalid
Set up a message handling function to catch errors from the internal
source and store the last return code to identify error situations
when returning from a fragment download.
Also moves the duration increase to after the download when we
know if it was successful or not
When using the internal source, hlsdemux doesn't know the caps of
the input before adding the pad, so remove the arguments that would
use that as it is always NULL.
And use a specific flag to signal when a pad switch is required.
Using the discont flag is a bad idea now because when a fragment
download fails it will lead to exposing a pad group without any
data, causing decodebin to abort.
When receiving EOS from the internal src, increase the current position
by the fragment duration to allow correctly restoring the download position
if the bitrate changes
Use the same properties as uridownloader to keep connections alive
between consecutive fragment downloads.
1) set keep-alive property to true
2) keep the element in READY instead of in NULL
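A hedged sketch of the two steps, assuming the internal source is
souphttpsrc:

  /* 1) reuse the HTTP connection across fragments */
  g_object_set (demux->src, "keep-alive", TRUE, NULL);

  /* 2) when a fragment finishes, go down to READY instead of NULL so the
   *    connection stays open for the next fragment */
  gst_element_set_state (demux->src, GST_STATE_READY);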
Measure the download bitrate to be able to select
the best playlist.
Since the buffers are pushed directly downstream, the push might block.
The time is therefore only measured from the start of the download until
the pad push, and the measurement is restarted after the push returns.
Now the decryption is done buffer by buffer instead of on the
whole fragment at once. As the decryption expects multiples of 16 bytes,
a GstAdapter was added to properly chunk the buffers.
Also the last buffer must be resized depending on the value of the
last byte of the fragment, so hlsdemux always keeps a pending buffer
as it doesn't know if it is the last one yet
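A minimal sketch of the chunking, with decrypt_buffer() and pending_buffer
standing in for the real (hypothetical) helpers:

  gst_adapter_push (demux->adapter, encrypted_buffer);

  if (gst_adapter_available (demux->adapter) >= 16) {
    gsize size = gst_adapter_available (demux->adapter) & ~15; /* 16-byte multiple */
    GstBuffer *chunk = gst_adapter_take_buffer (demux->adapter, size);
    GstBuffer *clear = decrypt_buffer (demux, chunk);  /* hypothetical helper */

    /* push the previously pending buffer, now known not to be the last one,
     * and keep the new one pending until more data or EOS arrives */
    if (demux->pending_buffer)
      ret = gst_pad_push (demux->srcpad, demux->pending_buffer);
    demux->pending_buffer = clear;
  }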
The GstElement is directly linked into a ghost pad and
its buffers are pushed downstream as they are received. This way the
buffers are small, instead of a whole fragment, which usually
causes extra latency and makes buffering harder
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
Previously if the proxy server hostname was the empty string
curlhttpsink would never even set the libcurl option. For libcurl
however, having a proxy server hostname be the empty string means that
proxying should be disabled even if environment variables might be set.
Now with the restriction lifted, doing this is allowed.
https://bugzilla.gnome.org/show_bug.cgi?id=728960
rtcp_buffer_get_ssrc is called even with RTP buffers. This means we
might end up with an exception and not find any valid RTCP packet type
and thus hit GST_RTCP_TYPE_INVALID. We now take care of this.
https://bugzilla.gnome.org/show_bug.cgi?id=727512
This patch provides the basic infrastructure required for this.
Upload and Download have been ported to this.
Has the nice effect of allowing GstGLMemory to be our
refcounted texture object for any texture type (not just RGBA).
Should not lose any features/video formats.
But only add this for non-live playlists. For live playlists we already
have another thread that is periodically updating playlists.
Reason for this is that sometimes downloading a fragment can fail because
the URIs have changed or expired since last time.
Sequence numbers in different playlists are not guaranteed to be the same for the
same position, e.g. fragments could have different durations in different playlists.
In theory we should do exactly the same for live playlists, but unfortunately we can't
because doing this kind of seeking requires the complete playlist since we started
playback. For live playlists the server is however dropping fragments in the beginning
over time and we have no absolute time references.
The tag was dereferenced earlier. From the libschroedinger code,
it's not obvious to see whether tag and frame would be NULL at
the same time. I think it is likely that both will be non-NULL
here, but that's not certain. Additional tests may be needed
to avoid dereferencing tag and/or frame, but what to do if
only one is NULL isn't obvious, as the _get_tag function does
transfer ownership so isn't undoable.
Coverity 1139850
When we'd see an unknown stream type followed by an SDDS stream,
we'd get to the end of the switch with a NULL temp stream
pointer, and dereference it.
Coverity 1139708
Recent refactoring causes this code to be called with either a NULL
fragment, or a non NULL fragment. In the former case, we don't have
a buffer. In the latter case, the original code dealing with DISCONT
assumed the buffer was valid. Testing for a NULL buffer here thus
does not seem to change the intent, and fixes:
Coverity 1195147
Turns out there was the same issue as with subtitles.
There is space for a single audio stream, but up to 255
may be used based on a uint8_t value in a struct, which may
or may not be read from the (untrusted) data.
A comment in ifo_types.h says this value is either 0 or 1, so
we can ensure this here without drawbacks.
Coverity 1139585
There is space for a single subtitle stream, but up to 255
may be used based on a uint8_t value in a struct, which may
or may not be read from the (untrusted) data.
A comment in ifo_types.h says this value is either 0 or 1, so
we can ensure this here without drawbacks.
Coverity 1139586
There is a small chance that we might end up in the done step without
having any output available.
Furthermore, when going through not_ready, we need to ensure gst_buffer_unmap
has a properly initialized GstMapInfo.
CID #1139923
CID #1139924
CID #1139919
CID #1139920
gst_gl_context_create() might need to dispatch some operations to the
application's main thread, and calling this in the change_state function
can cause deadlocks.
* picked from old libgstegl:
- GstEGLImageMemory
- GstEGLImageAllocator
- last_buffer management from removed GstEGLImageBufferPool
* add-ons:
- GstEGLImageMemory now holds a reference on GstGLContext
so that it can delete the EGLImage and its gltexture source
while the associated gl context is current.
- add EGLImage support for GstVideoGLTextureUploadMeta which
mainly calls EGLImageTargetTexture2D
- GstGLBufferPool now supports GstEGLImageAllocator
- glimagesink / glfilters / etc.. now propose GstEGLImageAllocator
to upstream
https://bugzilla.gnome.org/show_bug.cgi?id=703343
We create our textures (in Desktop GL) with GL_TEXTURE_RECTANGLE,
vaapi attempts to bind our texture to GL_TEXTURE_2D which throws a
GL_INVALID_OPERATION error and thus, no video.
Also, by moving exclusively to GL_TEXTURE_2D and the npot extension
we also remove a difference between the Desktop GL and GLES2 code.
https://bugzilla.gnome.org/show_bug.cgi?id=712287
Fix bug #310775
gst-launch audiotestsrc ! libvisual_gl_projectM ! glimagesink is working
but for now you cannot append any other opengl filters between
libvisual_gl_projectM and glimagesink because our FBO is turned OFF.
It would require that libvisual allow splitting the rendering between
passes 1, 2, 3... and the final rendering, so that we could unbind our FBO
before the passes and then rebind it just before the final libvisual rendering.
hlsdemux causes a null pointer dereference if the media playlist
does not contain any media files. The gst_m3u8_client_get_duration
function assumes that demux->client->current->files is valid when
computing duration.
gst_m3u8_client_update needed to be modified to check for the
case of downloading an M3U8 file that doesn't contain any media
files, and returning an error to gsthlsdemux.c
This bug can be reproduced by creating a master m3u8 file that
contains one media playlist that points back to the master m3u8
file. For example create a file called bug725134.m3u8:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=1251135, CODECS="avc1.42001f mp4a.40.2", RESOLUTION=640x352
bug725134.m3u8
https://bugzilla.gnome.org/show_bug.cgi?id=725134
hlsdemux does not check for the '"' character in #EXT-X-STREAM-INF
attributes. The CODECS parameter is an example of an attribute
that might use the '"' symbol and might contain a ',' character
inside this quoted string.
For example: CODECS="avc1.77.30, mp4a.40.2"
hlsdemux does not correctly parse the RESOLUTION attribute, it
assumes that an '=' character is used to delineate the width
and height values, but the HLS RFC states that a 'x' character
must be used as the delimiter between width and height.
https://bugzilla.gnome.org/show_bug.cgi?id=725140
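A minimal sketch of the corrected RESOLUTION parsing:

  gint width, height;

  if (sscanf (value, "%dx%d", &width, &height) == 2) {
    /* store width/height for this variant stream */
  }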
...instead of adding them from the start of playlist every time. This
among other things fixes timestamps for live streams, where the playlist
is some kind of ringbuffer of fragments and thus adding from the beginning
of the playlist will miss the past fragments.
https://bugzilla.gnome.org/show_bug.cgi?id=724983
We now download fragments as fast as possible and push them downstream
while another thread is just responsible for updating live playlists
every now and then.
This simplifies the code a lot and together with the new buffering
mode for adaptive streams in multiqueue makes streams start much faster.
Also simplify threading a bit and hopefully make the GstTask usage safer.
Incorrect time scaling in gst_dash_demux_wait_for_fragment_to_be_available()
means that media segments are fetched before their availability time. This
patch fixes this.
https://bugzilla.gnome.org/show_bug.cgi?id=724875
demux->last_manifest_update is not initialised at startup, with the
effect that live manifests are reloaded immediately after the download
loop begins. This patch fixes this.
https://bugzilla.gnome.org/show_bug.cgi?id=724790
And only afterwards wait until a fragment was played. Otherwise we're keeping
our cache most of the time at "fragments-cache" fragments minus one.
Also allow setting "fragments-cache" to 1 now to start playback even faster.
Use glib to get a list of system "share" directories, then go through that
list, appending 'sounds/sf2/' to each directory to get a soundfont directory,
and looking for .sf2 files there.
This way fluiddec is able to load sf2 files on W32, because otherwise the
path '/usr/share/sounds/sf2' makes no sense there.
Fixes #724013
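A minimal sketch of the lookup, assuming the soundfonts live under
<datadir>/sounds/sf2/ in each system data directory:

  const gchar * const *dirs = g_get_system_data_dirs ();
  gint i;

  for (i = 0; dirs[i] != NULL; i++) {
    gchar *path = g_build_filename (dirs[i], "sounds", "sf2", NULL);
    GDir *dir = g_dir_open (path, 0, NULL);

    if (dir) {
      const gchar *name;
      while ((name = g_dir_read_name (dir))) {
        if (g_str_has_suffix (name, ".sf2")) {
          /* try loading this soundfont */
        }
      }
      g_dir_close (dir);
    }
    g_free (path);
  }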
nettle is used by newer versions of gnutls, while older versions of gnutls
used libgcrypt. Support both for now as not every distro has nettle yet.
nettle is preferred as it is more efficient to use and much smaller.
This will be incredibly slow if the upstream block size is very small. Instead
continue scanning for the header where we previously stopped.
For the standard filesrc block-size this made decoding a file about
3 times faster.
https://bugzilla.gnome.org/show_bug.cgi?id=719890
Merge various changes and fixes from the master mpegdemux
Performance improvement from the way streams are organised,
return flow combining, language tag event generation,
adjustments and fixes in debug output, and things like that.
Previously faces would only be detected if they were at least 30x30 pixels
large and at most 32x32 pixels. We keep the minimum setting (maybe needs
a property as in facedetect) but disable the maximum feature size.
See https://bugzilla.gnome.org/show_bug.cgi?id=722158
This disables the "max feature size" feature. The current configuration
is totally busted: The max feature size is hard-coded to 2 pixels more
than the user-supplied min feature size which pretty much means you need
to guess the size of the person's face to within a few pixels to get the
code to find it.
https://bugzilla.gnome.org/show_bug.cgi?id=722158
Remove the dashdemux seeking function to use the one implemented
in mpdparser as it is more complete. This also makes dashdemux not
crash when seeking on streams that use segment templates.
1275 is the maximum size of a frame, but the encoder may return
up to 3 frames, and we need a few extra bytes for TOC, etc. We
use 4000, which is a bit more, and suggested in the libopus docs.
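The arithmetic behind the number, as a sketch (field names are
illustrative):

  /* 3 frames x 1275 bytes = 3825, plus some headroom for the TOC etc.;
   * the libopus documentation suggests 4000 bytes */
  #define MAX_PAYLOAD_SIZE 4000

  guchar data[MAX_PAYLOAD_SIZE];
  gint size = opus_encode (enc->state, samples, frame_samples,
      data, MAX_PAYLOAD_SIZE);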
Download and push from the same task; this makes the code a lot simpler
to maintain. Also pushing from separate threads avoids deadlocking
when gst_pad_push blocks due to downstream queues being full.
Use a single lock for all streams instead of having separate locks.
This makes maintenance easier and at most points we would need
a single lock before iterating on all streams data. So not much
is gained from individual locks.
Make dash playlists with multiple periods work again by waiting
to switch the periods when all streams have reached the end of
the current period. The stream_loop is responsible for advancing
the period, but the download loops will already start downloading
data for the next period as soon as possible.
Handle multiple languages by using the not-linked return to stop
the download task for that stream. It can be reactivated when
a reconfigure event is received. Stopping the unused streams is
relevant to save network bandwidth
Instead of having a single download task for all streams, this
commit makes each stream have its own download loop, allowing
parallel download of fragments.
Always expose all streams instead of only exposing one of each type.
This is more aligned with gstreamer's way of working. It allows the user
to select the stream it wants to use by linking its pad and leaving
the unused ones as unlinked.
As streams now flow independently, the GstSegment needs to be put
on each stream so they can track the position of each one correctly
instead of being mixed in a single segment
Download and push from the same task; this makes the code a lot simpler
to maintain. Also pushing from separate threads avoids deadlocking
when gst_pad_push blocks due to downstream queues being full
When a stream gets a not-linked return, it will be marked as so and
won't download any more new fragments until a reconfigure event
is received. This will make mssdemux expose all pads, but only download
fragments for the streams that are actually being used.
Relying on the pads being linked/unlinked isn't enough in this scenario
as there might be an input-selector downstream that is actually discarding
buffers for a given linked pad.
When streams are switching, the old active stream can be blocked because
input-selector will block not-linked streams. In case the mssdemux's
stream loop is blocked pushing a buffer to a full queue downstream it will
never unblock as the queue will not drain (input-selector is blocking).
In this scenario, stream switching will deadlock as input-selector is
waiting for the newly active stream data and the stream_loop that would
push this data is blocked waiting for input-selector.
To solve this issue, whenever a stream is reactivated on a reconfigure
it will enter 'catch up mode'; in this mode it can push buffers
from its download thread until it reaches the current GstSegment's position.
This works because this timestamp will always be behind or equal to the maximum
timestamp pushed for all streams. After pushing data for this timestamp,
the stream will go back to default and be pushed sequentially from the main
streaming thread. By this time, the input-selector should have already
released the thread.
https://bugzilla.gnome.org/show_bug.cgi?id=711849
* ext/srtp/gstsrtp.[ch]: added GST_SRTP_CIPHER_AES_256_ICM to
GstSrtpCipherType and new function cipher_key_size.
* ext/srtp/gstsrtpenc.c: maximum key size is now 46 characters (14 for
the salt plus the key). If different ciphers are chosen for RTP and
RTCP the maximum needed key size is expected.
* ext/srtp/gstsrtpdec.c: minor documentation updates.
https://bugzilla.gnome.org/show_bug.cgi?id=720434
Alternates between 33 and 32 byte frames, but must start
with a 33 byte frame. This has been broken for ages since
the element was ported to the audio decoder base class.
https://bugzilla.gnome.org/show_bug.cgi?id=709416
This currently converts from ARGB64_F16 (16 bit float per component)
to ARGB64 by clipping. We should add support for the F16 format and
implement a conversion filter element that can apply gamma curves,
change exposure, etc.
It only gets the sink flag set when it adds the multifilesink, which
happens in NULL->READY and might be too late. Set the flag
explicitly in the constructor.
https://bugzilla.gnome.org/show_bug.cgi?id=711086
This patch fixes three memory leaks in hlsdemux, one that occurs
during normal operation and two that occur during error conditions.
The gst_hls_demux_get_next_fragment function calls
gst_fragment_get_buffer which increments the reference count
on the buffer but gst_hls_demux_get_next_fragment never calls unref on
the buffer. This means that the reference count for each downloaded
fragment never gets to zero and so its memory is never released.
This patch adds a call to gst_buffer_unref after the flags have been
updated on the buffer.
There is a leak-on-error in gst_hls_demux_decrypt_fragment if it fails
to download the key file. If the key fails to download, null is
returned without doing an unref on the encrypted fragment. The
semantics of gst_hls_demux_decrypt_fragment is that it takes ownership
of the encrypted fragment and releases it before returning.
There is a leak-on-error in gst_hls_src_buf_to_utf8_playlist in the
unlikely event that the gst_buffer_map fails. In the "happy path"
operation of gst_hls_src_buf_to_utf8_playlist the buffer gets an unref
before the function returns, therefore the error condition must do the
same.
https://bugzilla.gnome.org/show_bug.cgi?id=710881
Fixed up the error-handling code when downloading fragments.
Modified the error-handling code to use positive logic when
testing for cancellation of the download loop.
https://bugzilla.gnome.org/show_bug.cgi?id=701404
There is an issue for live streams where download_loop will keep
downloading segments until it gets a 404 error for a segment
that has not yet been published. This is a problem because this
request for a segment that doesn't exist will propagate all the
way back to the origin server(s). This means that dashdemux causes
extra load on the origin server(s) for segments that aren't yet
available.
This patch uses availabilityStartTime, period
and the host's idea of UTC to decide if a fragment is available to
be requested from an HTTP server and filter out requests for fragments
that are not yet available.
https://bugzilla.gnome.org/show_bug.cgi?id=701404
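A hedged sketch of the availability test (names are illustrative):

  GDateTime *now = g_date_time_new_now_utc ();
  GDateTime *avail = g_date_time_add_seconds (availability_start_time,
      period_start + (segment_index + 1) * segment_duration);

  if (g_date_time_compare (avail, now) > 0) {
    /* the segment is not yet published; wait instead of sending a request
     * that would just produce a 404 on the origin server */
  }

  g_date_time_unref (avail);
  g_date_time_unref (now);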
On some live HLS streams, gst_hls_demux_switch_playlist causes
assertion failures because it tried to dereference a NULL fragment.
This is because g_queue_peek_tail sometimes was returning NULL and
this case was not being checked.
This patch does two things:
* move the g_queue_peek_tail inside the semaphore protection
* check if g_queue_peek_tail returns NULL
https://bugzilla.gnome.org/show_bug.cgi?id=708849
gstdashdemux.c:1753: warning: format '%llu' expects type 'long long unsigned int', but argument 8 has type 'long unsigned int'
gstdashdemux.c:2224: warning: format '%llu' expects type 'long long unsigned int', but argument 9 has type 'guint64'
gstdashdemux.c:2224: warning: format '%llu' expects type 'long long unsigned int', but argument 10 has type 'guint64'
gstmpdparser.h:530: warning: type qualifiers ignored on function return type
gstmpdparser.c:4177: warning: type qualifiers ignored on function return type
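The usual portable fix for the %llu format warnings is to use GLib's
format macros for 64-bit values, e.g.:

  GST_DEBUG ("fragment duration: %" G_GUINT64_FORMAT, duration);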
including the following supports and fixes:
* Create DirectFB surfaces from GstBufferPool
* Add NV12 pixel format support
* Don't use the cursor in the exclusive mode
- EnableCursor() can be only used when the administrative mode is set
in DirectFB 1.6.0 and later.
* Support multiple plane rendering for planar color formats
- This accommodates the chroma plane offsets of the framebuffer
in planar formats.
* Invoke SetConfiguration regardless of video mode setting in setcaps()
- SetConfiguration() method should be invoked regardless of
the result of gst_dfbvideosink_get_best_vmode(), since the two are
unrelated.
* Disable DirectFB signal handler
- "--dfb:no-sighandler" option is passed to DirectFBInit().
This prevents DirectFB from trying to kill the process and allows
GStreamer's termination sequence to proceed normally.
https://bugzilla.gnome.org/show_bug.cgi?id=703520
For SegmentTemplate elements containing a startNumber attribute, the
`number' member of GstMediaSegments should be offset by the value of
startNumber; however, this is not currently the case. As a result, the
first URI(s) requested by the download loop will be wrong.
This commit ensures that segment numbers will be offset by startNumber
when one is present in a SegmentTemplate element.
https://bugzilla.gnome.org/show_bug.cgi?id=705661
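A minimal sketch of the offset, with illustrative field names:

  /* segment numbers start at startNumber, not at 0 */
  media_segment->number = segment_index + segment_template->startNumber;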
When using a SegmentTemplate element, the timestamps of the buffers
output by dashdemux are incorrect, causing problems downstream.
The reason is that GstMediaSegment start times are calculated (in
gst_mpdparser_get_chunk_by_index) by multiplying segment index by
segment duration and then scaling the result according to the `timebase'
attribute from the MPD. However, the segment duration is already a
GstClockTime (i.e., it has already been scaled according to the timebase
from the MPD and converted to a nanosecond value), so multiplying it by
the segment index will give the correct timestamp without the need for
any further scaling.
https://bugzilla.gnome.org/show_bug.cgi?id=705679
This prevents locking up on startup when one of the streams only has a
single buffer and mssdemux decides to push an EOS event right
after it.
This prevents deadlocks on startup on files that have only a very
large buffer for a stream and the queue is filled and will lock on
the eos event that is pushed after the buffer. As no buffers have yet
been pushed to other streams, the pipeline locks on preroll
Every encrypted fragment will be a multiple of 128 bits, the last byte
contains the number of bytes that were added as padding in the end
and should be removed.
https://bugzilla.gnome.org/show_bug.cgi?id=701673
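A minimal sketch of the unpadding step applied to the last decrypted buffer
of the fragment:

  GstMapInfo map;
  guint8 padding;

  gst_buffer_map (buffer, &map, GST_MAP_READ);
  padding = map.data[map.size - 1];       /* number of padding bytes added */
  gst_buffer_unmap (buffer, &map);

  gst_buffer_resize (buffer, 0, gst_buffer_get_size (buffer) - padding);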
When using an HLS encrypted stream, an assertion failure is thrown:
(gst-launch-1.0:31028): GLib-GObject-WARNING **: cannot register
existing type `GstFragment'
(gst-launch-1.0:31028): GLib-CRITICAL **: g_once_init_leave: assertion
`result != 0' failed
Eventually tracked this down to the call gst_fragment_new()
in function gst_hls_demux_decrypt_fragment.
The GstFragment class is defined in ext/hls/gstfragment.c and in
gst-libs/gst/uridownloader/gstfragment.c. Having two class definitions
with the same name causes the assert failure when trying to allocate
GstFragment. Deleting the version from hls and editing the
Makefile.am solves this assert failure.
https://bugzilla.gnome.org/show_bug.cgi?id=704555
During a live stream it is possible for dashdemux to lag behind on a
slow connection or to rush ahead if the connection is too fast.
For the first case it is necessary to jump some segments ahead to be able to
continue playback, as old segments are usually deleted from the server.
For the latter, dashdemux should wait a little before attempting another
download, to give the server time to produce a new segment
When using a template based segment list, do not try to
construct a finite segment list for the limits of the available periods.
We might not know when the period ends (for live streams) and we can
always create the segment on demand when requested by dashdemux,
avoiding use of some memory and cpu when re-creating this list.
Replaces the two lists that are likely to grow large with more appropriate
structures to improve performance.
Replaces the S nodes GList with a GQueue; this reduces startup latency
because appending an element no longer traverses the whole list.
Replaces the processed media segments GList with a GPtrArray, as it is
constantly accessed by index during playback.
An unknown segment duration is an issue with the MPD and not
a programming issue, so the assert isn't useful here. Instead, check
and return an error code so the caller can fall back to alternatives
When dashdemux selects its first fragment, it always selects the
first fragment listed in the manifest. For on-demand content,
this is the correct behaviour. However for live content, this
behaviour is undesirable because the first fragment listed in the
manifest might be some considerable time behind "now".
The commit uses the host's idea of UTC and tries to find the
oldest fragment that contains samples for this time of day.
https://bugzilla.gnome.org/show_bug.cgi?id=701509
According to the MPEG-DASH spec, certain elements (i.e.
SegmentBase, SegmentTemplate, and SegmentList) should inherit
attributes from the same elements in the containing AdaptationSet
or Period.
Updated the SegmentBase, SegmentTemplate, and SegmentList parsers
to properly inherit attributes from the corresponding elements in
AdaptationSet and/or Period.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
Convert all xml attribute/content parsing functions to return a
boolean value indicating whether or not the attribute/content was
present. We need this finer-grained control in order to properly
implement the inheritance policies described in the spec
Also fixed several memory leak conditions when handling errors in
the xml attribute/content parsing functions.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
Ensure that g_free/xmlFree is used correctly based on how the
memory was allocated.
When deallocating GLists, there were many places that were using
g_list_foreach and g_list_free. Converted these occurrences to
call g_list_free_full.
Add NULL checks to all xmlFree calls since the documentation does
not guarantee that passing NULL is safe
In places where we are strdup'ing memory allocated by libxml2,
changed those calls to use xmlMemStrdup().
There were several places where we were missing g_slice_free when
deallocating a top-level node structure.
https://bugzilla.gnome.org/show_bug.cgi?id=702837
The Wayland interface can offer two buffer pixel formats:
WL_SHM_FORMAT_XRGB8888 and WL_SHM_FORMAT_ARGB8888.
Update waylandsink to support them and check if the format is really available.
https://bugzilla.gnome.org/show_bug.cgi?id=702112
Fixes:
In file included from gstsegmentation.h:51:0,
from gstopencv.c:42:
/usr/include/opencv2/video/background_segm.hpp:47:16: fatal error: list:
No such file or directory
#include <list>
^
compilation terminated.
https://bugzilla.gnome.org/show_bug.cgi?id=702297
It was not properly divided by GST_SECOND. Also fix an issue with
max-buffering-time being multiplied by GST_SECOND every time the
property is retrieved.
https://bugzilla.gnome.org/show_bug.cgi?id=700487
Split the introspection and registration part. This way we only need to open all
plugins when updating the registry. When reading the registry we can register
the elements entirely from the cache.
Add colour image enhancement element based on Retinex algorithm. Two types
exist, namely basic and multiscale; both are described in this article:
Rahman, Zia-ur, Daniel J. Jobson, and Glenn A. Woodell. "Multi-scale retinex
for color image enhancement." Image Processing, 1996. Proceedings.,
International Conference on. Vol. 3. IEEE, 1996
Visually speaking the result looks a bit funny, but is pretty invariant to
lighting changes, which is good for some applications, like image
segmentation.
https://bugzilla.gnome.org/show_bug.cgi?id=700977
WmaPro is actually wmaversion 3, and can also be found by the
WMAP fourcc.
Some manifests also contain the block_align field as "PacketSize"
in the audio track description, the libav decoders require it
to be present in caps.
Fixes #699921
Detect when the eagl surface changed its dimensions (when the user rotates
the device, for example) and adapt the egl internals to draw to that,
preventing iOS from resizing the image again when drawing.
This is particularly harmful when eagl would scale down an image
to draw and the iOS screen would scale it back up because the
surface is now bigger than when the element was configured.
wma v2 expects the block_align, channels and rate fields to be set in its caps.
These aren't present directly in the manifests, so mssdemux should parse
them from the waveformatex structure
https://bugzilla.gnome.org/show_bug.cgi?id=699924
Bitrate info is always present on the QualityLevel xml node as part
of the adaptive selection processing; put it into the caps, as some
decoders require it (avdec_wmav2 for example)
https://bugzilla.gnome.org/show_bug.cgi?id=699924
It's not developed any more and replaced by the
libschroedinger-based elements in gst-plugins-good.
(The libschroedinger 1.0.9 release notes state "This
is an exciting release: most of the encoding tools in
dirac-research have been ported over to Schrödinger, so
now schro has the same or better compression efficiency
as dirac-research.")
TRM IDs are MusicBrainz' old audio fingerprinting system from
Relatable, they were phased out in favour of MusicIPs PUIDs.
https://wiki.musicbrainz.org/History:TRM
In some scenarios, for example in QtWebKit, it might be difficult to obtain full
control of the egl display and it might be only accessible indirectly via
eglGetCurrentDisplay().
https://bugzilla.gnome.org/show_bug.cgi?id=700058
We only want to adjust the timestamps so that they start from 0 for live
streams. Non-live streams already start from 0, and after a seek we actually
want the timestamp to be the position we seek to.
Non-live streams should timestamp buffers with a running-time starting from
0. Since we already push a 0 -> -1 segment, bring the timestamps to 0
by subtracting the initial timestamp.
The xmlCleanupParser function seems to cleanup all statically
allocated libxml variables, making it unusable. We can't guarantee
that dashdemux won't need it anymore, so better not call it.
Manifest updates should be done periodically for live streams,
this patch makes the demuxer create a new manifest client for
the new version and transfers the stream position to the new
one, discarding the old one afterwards.
A small struct that keeps a short history of fragment download bitrates
to have an average measure of N last fragments instead of using only
the last downloaded bitrate
Do not use a global bitrate, as the sizes of the fragments matter
when calculating the download rate: since the connection setup time is
also counted in the download duration, a smaller fragment
will have a lower bitrate than a larger one.
This avoids switching the bitrates for streams frequently because
of bitrate mismatches
Instead of downloading 1 fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
were too different for streams.
dashdemux shouldn't emit the buffering message as that can pause
the pipeline. It has no proper knowledge of the downstream buffering
status so it can pause the pipeline when it isn't necessary. It should
have an internal buffer for downloading the streams ahead of playback,
but that shouldn't make it able to stop the pipeline for buffering.
A particular case in which this is bad is when a pad switch happens
(changing bitrates for example), the new pads dashdemux creates
will get linked to demuxers and new queues will be created,
these queues are initially empty and dashdemux will quickly
drain its buffers by pushing them to those queues. So it
would have no more buffers internally and would emit a
buffering message with a low ratio, causing the pipeline
to pause when it wouldn't be necessary.
Put EOS on the streams queues after the last fragment from the
last period for each stream. This way we keep it serialized
with the buffers and it will work when streams have different
ending times
The smallest queue should be used to prevent blocking the download
thread when a stream has too much data buffered, leaving the other
streams starving from fragments
Each stream has its own durations and timestamps, the fragment number
is different for each stream when seeking, so the seek has to be done
for all streams, rather than on a single stream and propagated to
others
GstDataQueue has proper locking and provides functions to limit the
size of the queue. Also has blocking calls that are useful to
our multithread scenario in Dash.
Store the buffers separately for each stream, this is clearer than
having a queue with a list of buffers. It also allows easier selection
of buffers to push in later refactors
Fragments should be pushed ASAP as downstream should be responsible for
doing the synchronization and proper buffering.
This has the great side effect of fixing most of the seeking A/V sync issues.
- the MPD file is updated in the download loop (only if we have a "dynamic" MPD and minimumUpdatePeriod is valid);
- properly LOCK/UNLOCK the GstMpdClient;
This fixes conflicts with the HLS plugin, which is also named
fragmented.
When building its registry, gstreamer was picking one or the other
between hls and dashdemux.
This fixes build that has been broken by commit
fb9aeac6552021b176a4c4bd07265e02a0b70e0f.
gst_mpd_client_get_target_duration has been removed, and
gst_mpd_client_get_next_fragment_duration should be used instead.
This was necessary to support variable-duration Fragments.
in the new API:
- gst_mpd_client_get_current_position returns the timestamp of the NEXT fragment to download;
- gst_mpd_client_get_next_fragment_duration returns the duration of the next fragment to download;
- gst_mpd_client_get_media_presentation_duration returns the mediaPresentationDuration from the MPD file;
also there is a new internal parser function:
- gst_mpd_client_get_segment_duration extracts the constant segment duration from the MPD file
(only used when there is no SegmentTimeline syntax element in the current representation)
In gst_mpd_client_get_next_fragment, we set the timestamp/duration of the fragment just downloaded
copying the values from the corresponding GstMediaSegment.
TODO: rework SEEKING to support seeking across different Periods.
- Periods are played in sequence, from PeriodStart to PeriodEnd
- seamless switching from one Period to the next one works fine;
- the 'new-segment' generation is broken, so if we need to switch pads for a new Period there is a crash;
- build a list of the available Periods with their start and duration time
- add the list of GstStreamPeriod in the GstMpdClient data struct
- remove cur_period from GstMpdClient and introduce an API to get the current GstStreamPeriod
- several API clean-ups
build the list of segments to be played using the SegmentTimeline syntax, if present
bugfixes:
- for dynamic MPD files, when mediaPresentationDuration is not present use minimumUpdatePeriod instead
- do not add a spurious '$' when building an URL from a template like "$Bandwidth$/init.mp4v"
- introduce gst_mpd_client_add_media_segment() to avoid code duplication
other fixes:
- fixed a buffering bug: now we stop buffering when we reach the end of manifest
- now gst_mpd_client_get_target_duration() always returns a valid duration
(in case of single-segment streams, we return either Period duration or mediaPresentation duration)
TODO: support SegmentTimeline
SegmentList nodes are allowed into Period, AdaptationSet or Representation nodes
and there is at most 1 element, so no need to keep a list;
Period nodes cannot have any Representation elements, as AdaptationSet nodes are mandatory;
this breaks compatibility with some legacy DASH test sequences.
gstmpdparser.c: In function ‘gst_mpdparser_get_list_and_nb_of_audio_language’:
gstmpdparser.c:2891: warning: ‘return’ with no value, in function returning non-void
g_ascii_strtoull() returns a long long integer, but we need to
pass a normal int to gst_structure_set() for fields of G_TYPE_INT,
so cast appropriately.
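For example (the field name is illustrative):

  gst_structure_set (structure, "bandwidth", G_TYPE_INT,
      (gint) g_ascii_strtoull (value, NULL, 10), NULL);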
The buffer parameter wasn't being used; it was only there to signal that
a buffer was downloaded and to advance to the next fragment in the
manifest.
Replace the buffer with a boolean that has the same effect and is
safer
Connection setup times seem to matter when measuring the download
rate of different streams. Streams with longer fragments have a
*relatively* lower connection setup time and achieve higher bitrates.
Using the average seems unfair here, so use each stream's measured bitrate
to select its best quality option.
We need to cancel the downloader for each stream before joining the
main download task, otherwise the download task will block until all
the stream tasks finish.
When the codec is AAC-LC, some server implementations (e.g. Microsoft IIS)
don't add the CodecPrivateData attribute. The element needs to re-create
the codec data from the Quality Level attributes (channels and sampling rate).
There is no way to know if a live stream is really finished, so try to
reload the manifest and check if there are more fragments to download.
Otherwise, just signal EOS.
Live streams force the demuxer to keep reloading the Manifest from
time to time, as the new fragments are being added as they are recorded.
The demuxer should also try to keep up and detect when it had to skip
fragments, marking the discont flag when that happens.
Curiously, the spec doesn't seem to mention when/how a live stream is supposed
to end, so keep trying downloads until the demuxer errors out.
Use pad tasks to download data and an extra task that takes the earliest
buffer (the one with the smallest timestamp) and pushes it on the
corresponding pad.
This prevents the audio stream from rushing ahead on buffers, as its
fragments should be smaller
When connection-speed changes, signal that we might need a bitrate
switch. During the switch, a new pad group is added and the old one
is drained and removed.
New pads also need to push newsegments before starting to stream
This speed limits the maximum bitrate of streams. Currently it
is only read when starting the pipeline, but it should be used
for switching bitrates during playback to adapt to network
changes.
mssdemux should set the streams it has exposed as active so that
the manifest won't use the non-active streams to compute total bitrates
or to provide fragments
When the stream can't have its caps detected, it's better not to expose it.
If no streams are known, signal an error about no playable streams to
the application
Adds basic handling for seek in time events. Needs to cooperate
with the downstream qtdemux so that it forwards the seeks and
the corresponding newsegments