According to the MPEG-DASH spec, certain elements (namely
SegmentBase, SegmentTemplate, and SegmentList) should inherit
attributes from the corresponding elements in the containing
AdaptationSet or Period.
Updated the SegmentBase, SegmentTemplate, and SegmentList parsers
to properly inherit attributes from the corresponding elements in
AdaptationSet and/or Period.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
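A minimal sketch of the inheritance rule, using an illustrative node
struct rather than the actual gstmpdparser types:

  #include <glib.h>

  typedef struct {
    guint timescale;   /* 0 means "not set at this level" */
    gchar *media;      /* NULL means "not set at this level" */
  } SegmentTemplate;   /* illustrative */

  static void
  segment_template_inherit (SegmentTemplate * node,
      const SegmentTemplate * parent)
  {
    if (parent == NULL)
      return;
    /* only take the parent's value when this level didn't set one */
    if (node->timescale == 0)
      node->timescale = parent->timescale;
    if (node->media == NULL && parent->media != NULL)
      node->media = g_strdup (parent->media);
  }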
Convert all xml attribute/content parsing functions to return a
boolean value indicating whether or not the attribute/content was
present. We need this finer-grained control in order to properly
implement the inheritance policies described in the spec.
Also fixed several memory leak conditions when handling errors in
the xml attribute/content parsing functions.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
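A rough sketch of the new parser shape, with illustrative names (the
real functions handle more value types and error reporting):

  #include <glib.h>
  #include <libxml/tree.h>

  static gboolean
  parse_uint_attribute (xmlNode * node, const gchar * name, guint * out)
  {
    xmlChar *prop = xmlGetProp (node, (const xmlChar *) name);
    gboolean present = FALSE;

    if (prop != NULL) {
      *out = (guint) g_ascii_strtoull ((const gchar *) prop, NULL, 10);
      present = TRUE;
      xmlFree (prop);
    }
    /* FALSE lets the caller fall back to the inherited value */
    return present;
  }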
Ensure that g_free/xmlFree is used correctly based on how the
memory was allocated.
When deallocating GLists, there were many places using
g_list_foreach and g_list_free. Converted these occurrences to
g_list_free_full.
Add NULL checks to all xmlFree calls since the documentation does
not guarantee that passing NULL is safe.
In places where we are strdup'ing memory allocated by libxml2,
changed those calls to use xmlMemStrdup().
There were several places where we were missing g_slice_free when
deallocating a top-level node structure.
https://bugzilla.gnome.org/show_bug.cgi?id=702837
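An illustrative cleanup helper showing the pattern (the struct and
names are made up for the example):

  #include <glib.h>
  #include <libxml/xmlmemory.h>

  typedef struct {
    xmlChar *id;     /* allocated by libxml2 */
    gchar *name;     /* allocated by GLib */
  } Node;            /* illustrative */

  static void
  free_node (Node * node)
  {
    if (node->id != NULL)
      xmlFree (node->id);   /* libxml2 memory -> xmlFree, NULL-checked */
    g_free (node->name);    /* GLib memory -> g_free (NULL-safe) */
    g_slice_free (Node, node);
  }

  /* one call instead of g_list_foreach + g_list_free */
  g_list_free_full (nodes, (GDestroyNotify) free_node);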
The Wayland interface can offer two buffer pixel formats: WL_SHM_FORMAT_XRGB8888 and WL_SHM_FORMAT_ARGB8888.
Update waylandsink to support both and check whether the requested format is actually available.
https://bugzilla.gnome.org/show_bug.cgi?id=702112
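A sketch of the availability check, built on the standard
wl_shm_listener callback; the bookkeeping struct is illustrative:

  #include <glib.h>
  #include <wayland-client.h>

  typedef struct {
    gboolean have_xrgb8888;
    gboolean have_argb8888;
  } FormatInfo;              /* illustrative */

  static void
  shm_format (void *data, struct wl_shm *wl_shm, uint32_t format)
  {
    FormatInfo *info = data;

    if (format == WL_SHM_FORMAT_XRGB8888)
      info->have_xrgb8888 = TRUE;
    else if (format == WL_SHM_FORMAT_ARGB8888)
      info->have_argb8888 = TRUE;
  }

  static const struct wl_shm_listener shm_listener = { shm_format };

  /* registered with: wl_shm_add_listener (shm, &shm_listener, &info); */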
Fixes:
In file included from gstsegmentation.h:51:0,
from gstopencv.c:42:
/usr/include/opencv2/video/background_segm.hpp:47:16: fatal error: list:
No such file or directory
#include <list>
^
compilation terminated.
https://bugzilla.gnome.org/show_bug.cgi?id=702297
It was not properly divided by GST_SECOND. Also fix an issue with
max-buffering-time being multiplied by GST_SECOND every time the
property is retrieved.
https://bugzilla.gnome.org/show_bug.cgi?id=700487
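The intended pattern, sketched with illustrative names: keep the value
in nanoseconds internally and convert only at the property boundary, so
repeated gets don't re-multiply.

  /* set_property: seconds from the user -> nanoseconds internally */
  case PROP_MAX_BUFFERING_TIME:
    demux->max_buffering_time = g_value_get_uint (value) * GST_SECOND;
    break;

  /* get_property: nanoseconds internally -> seconds for the user */
  case PROP_MAX_BUFFERING_TIME:
    g_value_set_uint (value, demux->max_buffering_time / GST_SECOND);
    break;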
Split the introspection and registration part. This way we only need to open all
plugins when updating the registry. When reading the registry we can register
the elements entirely from the cache.
Add a colour image enhancement element based on the Retinex algorithm. Two types
exist, namely basic and multiscale; both are described in this article:
Rahman, Zia-ur, Daniel J. Jobson, and Glenn A. Woodell. "Multi-scale retinex
for color image enhancement." Image Processing, 1996. Proceedings.,
International Conference on. Vol. 3. IEEE, 1996
Visually speaking the result looks a bit funny, but it is fairly invariant to
lighting changes, which is good for some applications, like image
segmentation.
https://bugzilla.gnome.org/show_bug.cgi?id=700977
WmaPro is actually wmaversion 3, and can also be found by the
WMAP fourcc.
Some manifests also contain the block_align field as "PacketSize"
in the audio track description; the libav decoders require it
to be present in the caps.
Fixes #699921
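Roughly, the caps end up looking like this (the variables are
illustrative, taken from the manifest attributes):

  caps = gst_caps_new_simple ("audio/x-wma",
      "wmaversion", G_TYPE_INT, 3,              /* WMA Pro */
      "block_align", G_TYPE_INT, packet_size,   /* from "PacketSize" */
      "channels", G_TYPE_INT, channels,
      "rate", G_TYPE_INT, rate, NULL);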
Detect when the EAGL surface changed its dimensions (when the user rotates
the device, for example) and adapt the EGL internals to draw to that,
preventing iOS from resizing the image again when drawing.
This is particularly harmful when EAGL would scale down an image
to draw and the iOS screen would scale it back up because the
surface is now bigger than when the element was configured.
WMA v2 expects the block_align, channels and rate fields to be set in its caps.
These aren't present directly in the manifests, so mssdemux should parse
them from the WAVEFORMATEX structure.
https://bugzilla.gnome.org/show_bug.cgi?id=699924
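A sketch of pulling those fields out of the WAVEFORMATEX blob (all
fields are little-endian; the buffer name is illustrative, and the
GST_READ_* macros come from gst/gstutils.h):

  /* WAVEFORMATEX layout: wFormatTag(2) nChannels(2) nSamplesPerSec(4)
   * nAvgBytesPerSec(4) nBlockAlign(2) wBitsPerSample(2) cbSize(2) */
  guint16 channels    = GST_READ_UINT16_LE (waveformatex + 2);
  guint32 rate        = GST_READ_UINT32_LE (waveformatex + 4);
  guint16 block_align = GST_READ_UINT16_LE (waveformatex + 12);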
Bitrate info is always present in the QualityLevel XML node as part
of the adaptive selection processing; put it into the caps as some
decoders require it (avdec_wmav2, for example).
https://bugzilla.gnome.org/show_bug.cgi?id=699924
It's no longer developed and has been replaced by the
libschroedinger-based elements in gst-plugins-good.
(The libschroedinger 1.0.9 release notes state "This
is an exciting release: most of the encoding tools in
dirac-research have been ported over to Schrödinger, so
now schro has the same or better compression efficiency
as dirac-research.")
TRM IDs are MusicBrainz' old audio fingerprinting system from
Relatable; they were phased out in favour of MusicIP's PUIDs.
https://wiki.musicbrainz.org/History:TRM
In some scenarios, for example in QtWebKit, it might be difficult to obtain full
control over the EGL display, and it might only be accessible indirectly via
eglGetCurrentDisplay().
https://bugzilla.gnome.org/show_bug.cgi?id=700058
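The fallback is roughly this (the variable holding an externally
supplied display is illustrative):

  #include <EGL/egl.h>

  EGLDisplay display = provided_display;   /* may not be available */

  if (display == EGL_NO_DISPLAY)
    display = eglGetCurrentDisplay ();     /* reuse the app's current display */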
We only want to adjust the timestamps so that they start from 0 for live
streams. Non-live streams already start from 0, and after a seek we actually want
the timestamps to be the position we seek to.
Non-live streams should timestamp buffers with a running-time starting from
0. Since we already push a 0 -> -1 segment, bring the timestamps to 0
by subtracting the initial timestamp.
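A sketch of the adjustment (names are illustrative; per the follow-up
fix above, this only applies to live streams, whose timestamps don't
already start at 0):

  if (stream->base_ts == GST_CLOCK_TIME_NONE)
    stream->base_ts = GST_BUFFER_TIMESTAMP (buffer);   /* first buffer seen */

  if (GST_BUFFER_TIMESTAMP_IS_VALID (buffer))
    GST_BUFFER_TIMESTAMP (buffer) -= stream->base_ts;  /* running time from 0 */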
The xmlCleanupParser function seems to clean up all statically
allocated libxml variables, making it unusable. We can't guarantee
that dashdemux won't need it anymore, so better not call it.
Manifest updates should be done periodically for live streams,
this patch makes the demuxer create a new manifest client for
the new version and transfers the stream position to the new
one, discarding the old one afterwards.
A small struct that keeps a short history of fragment download bitrates
to have an average measure over the last N fragments instead of using only
the last downloaded bitrate.
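A sketch of the moving-average idea; the names and history length are
illustrative, not the actual dashdemux structure:

  #include <glib.h>

  #define NUM_LOOKBACK_FRAGMENTS 3

  typedef struct {
    guint64 bitrates[NUM_LOOKBACK_FRAGMENTS];
    guint count;        /* how many slots are filled */
    guint pos;          /* next slot to overwrite */
  } BitrateHistory;

  static void
  bitrate_history_add (BitrateHistory * h, guint64 bitrate)
  {
    h->bitrates[h->pos] = bitrate;
    h->pos = (h->pos + 1) % NUM_LOOKBACK_FRAGMENTS;
    if (h->count < NUM_LOOKBACK_FRAGMENTS)
      h->count++;
  }

  static guint64
  bitrate_history_average (const BitrateHistory * h)
  {
    guint64 sum = 0;
    guint i;

    if (h->count == 0)
      return 0;
    for (i = 0; i < h->count; i++)
      sum += h->bitrates[i];
    return sum / h->count;
  }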
Do not use a global bitrate, as the sizes of the fragments matter
when calculating the download rate: the connection setup time is
also counted in the download duration, so a smaller fragment
will show a lower bitrate than a larger one.
This avoids frequently switching bitrates for streams because
of bitrate mismatches.
Instead of downloading 1 fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
were too different between streams.
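A sketch of the selection step (the Stream struct and field names are
illustrative):

  #include <gst/gst.h>

  typedef struct {
    gboolean eos;
    GstClockTime next_timestamp;
  } Stream;                             /* illustrative */

  static Stream *
  select_next_stream (GList * streams)
  {
    Stream *best = NULL;
    GList *l;

    for (l = streams; l != NULL; l = l->next) {
      Stream *stream = l->data;

      if (stream->eos)
        continue;
      /* pick the stream whose next fragment has the smallest timestamp */
      if (best == NULL || stream->next_timestamp < best->next_timestamp)
        best = stream;
    }
    return best;
  }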
dashdemux shouldn't emit the buffering message as that can pause
the pipeline. It has no proper knowledge of the downstream buffering
status so it can pause the pipeline when it isn't necessary. It should
have an internal buffer for downloading the streams ahead of playback,
but that shouldn't make it able to stop the pipeline for buffering.
A particular case in which this is bad is when a pad switch happens
(changing bitrates, for example): the new pads dashdemux creates
get linked to demuxers, and new queues are created. These queues
are initially empty, and dashdemux will quickly drain its buffers
by pushing them to those queues. It would then have no more buffers
internally and would emit a buffering message with a low ratio,
causing the pipeline to pause when it isn't necessary.
Put EOS on the stream queues after the last fragment from the
last period for each stream. This way we keep it serialized
with the buffers and it will work when streams have different
ending times.
The smallest queue should be used to prevent blocking the download
thread when a stream has too much data buffered, which would leave the
other streams starving for fragments.
Each stream has its own durations and timestamps, and the fragment number
is different for each stream when seeking, so the seek has to be done
for all streams, rather than on a single stream and propagated to
the others.
GstDataQueue has proper locking and provides functions to limit the
size of the queue. It also has blocking calls that are useful in
our multithreaded DASH scenario.
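A sketch of the usage, assuming the GStreamer 1.x GstDataQueue API;
the size limit and names are illustrative:

  static gboolean
  queue_check_full (GstDataQueue * q, guint visible, guint bytes,
      guint64 time, gpointer user_data)
  {
    return bytes > 4 * 1024 * 1024;   /* cap each stream queue at ~4 MiB */
  }

  queue = gst_data_queue_new (queue_check_full, NULL, NULL, stream);

  item = g_slice_new0 (GstDataQueueItem);
  item->object = GST_MINI_OBJECT_CAST (buffer);
  item->size = gst_buffer_get_size (buffer);
  item->visible = TRUE;
  item->destroy = (GDestroyNotify) free_item;   /* illustrative helper */
  gst_data_queue_push (queue, item);            /* blocks while the queue is full */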
Store the buffers separately for each stream; this is clearer than
having a queue with a list of buffers. It also allows easier selection
of buffers to push in later refactors.
Fragments should be pushed ASAP as downstream should be responsible for
doing the synchronization and proper buffering.
This has the great side effect of fixing most of the seeking A/V sync issues.
- the MPD file is updated in the download loop (only if we have a "dynamic" MPD and minimumUpdatePeriod is valid);
- properly LOCK/UNLOCK the GstMpdClient;
This fixes conflicts with the HLS plugin, which is also named
fragmented.
When building its registry, gstreamer was picking one or the other
between hls and dashdemux.
This fixes the build that was broken by commit
fb9aeac6552021b176a4c4bd07265e02a0b70e0f.
gst_mpd_client_get_target_duration has been removed, and
gst_mpd_client_get_next_fragment_duration should be used instead.
This was necessary to support variable-duration Fragments.
In the new API:
- gst_mpd_client_get_current_position returns the timestamp of the NEXT fragment to download;
- gst_mpd_client_get_next_fragment_duration returns the duration of the next fragment to download;
- gst_mpd_client_get_media_presentation_duration returns the mediaPresentationDuration from the MPD file;
also there is a new internal parser function:
- gst_mpd_client_get_segment_duration extracts the constant segment duration from the MPD file
(only used when there is no SegmentTimeline syntax element in the current representation)
In gst_mpd_client_get_next_fragment, we set the timestamp/duration of the fragment just downloaded
by copying the values from the corresponding GstMediaSegment.
TODO: rework SEEKING to support seeking across different Periods.
- Periods are played in sequence, from PeriodStart to PeriodEnd
- seamless switching from one Period to the next one works fine;
- the 'new-segment' generation is broken, so if we need to switch pads for a new Period there is a crash;
- build a list of the available Periods with their start and duration time
- add the list of GstStreamPeriod in the GstMpdClient data struct
- remove cur_period from GstMpdClient and introduce an API to get the current GstStreamPeriod
- several API clean-ups
build the list of segments to be played using the SegmentTimeline syntax, if present
bugfixes:
- for dynamic MPD files, when mediaPresentationDuration is not present use minimumUpdatePeriod instead
- do not add a spurious '$' when building a URL from a template like "$Bandwidth$/init.mp4v"
- introduce gst_mpd_client_add_media_segment() to avoid code duplication
other fixes:
- fixed a buffering bug: now we stop buffering when we reach the end of the manifest
- now gst_mpd_client_get_target_duration() always returns a valid duration
(in case of single-segment streams, we return either Period duration or mediaPresentation duration)
TODO: support SegmentTimeline
SegmentList nodes are allowed in Period, AdaptationSet or Representation nodes,
and there is at most 1 element, so there is no need to keep a list;
Period nodes cannot have any Representation elements, as AdaptationSet nodes are mandatory;
this breaks compatibility with some legacy DASH test sequences.
gstmpdparser.c: In function ‘gst_mpdparser_get_list_and_nb_of_audio_language’:
gstmpdparser.c:2891: warning: ‘return’ with no value, in function returning non-void
g_ascii_strtoull() returns a 64-bit integer (guint64), but we need to
pass a normal int to gst_structure_set() for fields of G_TYPE_INT,
so cast appropriately.
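For example (the field and variable names are illustrative):

  gst_structure_set (structure, "bitrate", G_TYPE_INT,
      (gint) g_ascii_strtoull (value_str, NULL, 10), NULL);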
The buffer parameter wasn't being used; it was only there to signal that
a buffer was downloaded and to advance to the next fragment in the
manifest.
Replace the buffer with a boolean that has the same effect and is
safer.
Connection setup times seem to matter when measuring the download
rate of different streams. Streams with longer fragments have a
*relatively* lower connection setup time and achieve higher bitrates.
Using the average seems unfair here, so use each stream's measured bitrate
to select its best quality option.
We need to cancel the downloader for each stream before joining the main download task, otherwise
the download task will block until all the stream tasks finish.
When the codec is AAC-LC, some server implementations (e.g. Microsoft IIS) don't add the CodecPrivateData attribute. The element needs to re-create the codec data from the Quality Level attributes (channels and sampling rate).
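A sketch of rebuilding the 2-byte AAC AudioSpecificConfig (object type
2 = AAC-LC) from the QualityLevel's sampling rate and channel count;
the rate/channels variables are assumed to come from the manifest, and
the frequency index table is the standard one from ISO/IEC 14496-3:

  static const guint aac_sample_rates[] = { 96000, 88200, 64000, 48000,
    44100, 32000, 24000, 22050, 16000, 12000, 11025, 8000, 7350 };

  guint8 codec_data[2];
  guint freq_index = 0;

  while (freq_index < G_N_ELEMENTS (aac_sample_rates) &&
      aac_sample_rates[freq_index] != rate)
    freq_index++;
  /* freq_index == G_N_ELEMENTS (aac_sample_rates) means an unknown rate */

  /* 5 bits object type, 4 bits frequency index, 4 bits channel config */
  codec_data[0] = (2 << 3) | (freq_index >> 1);
  codec_data[1] = ((freq_index & 0x1) << 7) | (channels << 3);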
There is no way to know if a live stream is really finished, so try to reload the manifest and check if there are more fragments to download. Otherwise, just signal EOS.
Live streams force the demuxer to keep reloading the Manifest from
time to time, as the new fragments are being added as they are recorded.
The demuxer should also try to keep up and detect when it had to skip
fragments, marking the discont flag when that happens.
Curiously, the spec doesn't seem to mention when/how a live stream is supposed
to end, so keep trying downloads until the demuxer errors out.
Use pad tasks to download data and an extra task that gets the earliest
buffer (with the smallest timestamp) and pushes it on the corresponding
pad.
This prevents the audio stream from rushing ahead on buffers, as its
fragments should be smaller.
When connection-speed changes, signal that we might need a bitrate
switch. During the switch, a new pad group is added and the old one
is drained and removed.
New pads also need to push newsegments before starting to stream
This speed limits the maximum bitrate of streams. Currently it
is only read when starting the pipeline, but it should be used
for switching bitrates during playback to adapt to network
changes.
mssdemux should set the streams it has exposed as active so that
the manifest won't use the non-active streams to compute total bitrates
or to provide fragments.
When a stream's caps can't be detected, it is better not to expose it.
If no streams are known, signal a no-playable-streams error to
the application.
Adds basic handling for seek-in-time events. It needs to cooperate
with the downstream qtdemux so that it forwards the seeks and
the corresponding newsegments.
This is important for downstream to properly timestamp the samples
The default value is 10000000, but this can be set in the stream
or at the top-level manifest entry
Keep a ref on the pad to prevent it from being unreffed while the mssdemux
streams are still using it. Also reset the element when going to
READY instead of when going to NULL.
Use shorter names for the MSS manifest helper structure and functions.
This also continues the implementation of the stream fetching and pushing loop.
It now uses the base URL correctly and already fetches and pushes the fragments
downstream.
It seems EAGL expects the application to simply ignore unused
EAGL contexts, as their resources are released when a new
context is set as the current one. Also move the EGL extension
querying to after a context is set, to prevent crashes.
This makes the EAGL version of eglglessink reusable.
gstegladaptation_egl.c: In function 'gst_egl_adaptation_create_native_window':
gstegladaptation_egl.c:868:3: error: format '%p' expects argument of type 'void *', but argument 8 has type 'EGLNativeWindowType' [-Werror=format=]
GST_DEBUG_OBJECT (ctx->element, "Using window handle %p", window);
^
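A common way to silence this kind of warning, assuming the window
handle type fits a pointer, is an explicit cast:

  GST_DEBUG_OBJECT (ctx->element, "Using window handle %p", (gpointer) window);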
Put EGL-specific code into a separate file and create the same functions
for EAGL, Apple's EGL implementation.
At this point, the EAGL version wasn't compiled or tested as there isn't
any simple documented way to build 1.0 for iOS. This code for the EAGL
version is still the 0.10 version; some updates should be made when 1.0
is buildable for iOS.
gsteglglessink.c: In function 'gst_eglglessink_fill_texture':
gsteglglessink.c:1815:3: error: format '%d' expects argument of type 'int', but argument 11 has type 'gsize' [-Werror=format]
gsteglglessink.c: In function 'gst_eglglessink_configure_caps':
gsteglglessink.c:2850:3: error: format '%p' expects argument of type 'void *', but argument 8 has type 'EGLNativeWindowType' [-Werror=format]
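For the gsize case, the usual portable fix is GLib's format modifier
(the variable name here is illustrative):

  GST_DEBUG_OBJECT (sink, "Buffer size %" G_GSIZE_FORMAT, size);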