Add a new timestamp mode that assumes the local and remote clocks are
synchronized. It takes the first timestamp as a base time and then uses the RTP
timestamps for the output PTS.
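A minimal sketch of the computation, with an illustrative helper name and
signature (not the actual element code):

    /* Sketch only: with local and remote clocks assumed synchronized, remember
     * the first RTP timestamp and base time, then derive every later PTS from
     * the RTP timestamp delta scaled by the clock rate. */
    static GstClockTime
    calc_output_pts (guint32 rtp_time, guint32 base_rtp_time,
        GstClockTime base_time, gint clock_rate)
    {
      /* unsigned 32-bit arithmetic so RTP timestamp wrap-around is handled */
      guint32 delta = rtp_time - base_rtp_time;

      return base_time + gst_util_uint64_scale (delta, GST_SECOND, clock_rate);
    }
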
matroska-demux.c: In function 'gst_matroska_demux_add_stream':
matroska-demux.c:1379:7: error: format '%u' expects argument of type 'unsigned int', but argument 4 has type 'guint64' [-Werror=format=]
"%03u", context->uid);
^
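The usual fix for this class of warning (a sketch, not the exact diff; the
buffer is illustrative) is to use the G_GUINT64_FORMAT macro so the format
string matches the guint64 argument:

    /* sketch: print a guint64 with the matching GLib format macro */
    gchar buf[32];

    g_snprintf (buf, sizeof (buf), "%03" G_GUINT64_FORMAT, context->uid);
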
If the pipeline is stalled for too long, souphttpsrc will block and
stop fetching data from the network. This can cause the connection to
drop, and souphttpsrc would handle it as an EOS. This patch makes it
persist and try to fetch more data until the end of the content length,
or until receiving an error that the request is beyond the limits in case
the content length is unknown.
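One way to picture the retry is a Range request restarting at the current read
offset; a minimal sketch, assuming a SoupMessage is being (re)built and that
read_position tracks how far we already got (not the actual souphttpsrc code):

    #include <libsoup/soup.h>

    /* Sketch only: instead of treating a dropped connection as EOS, re-request
     * the remainder from the current offset; start=read_position, end=-1 means
     * "from this byte to the end of the resource". */
    static void
    request_remainder (SoupMessage * msg, goffset read_position)
    {
      soup_message_headers_set_range (msg->request_headers, read_position, -1);
    }
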
https://bugzilla.gnome.org/show_bug.cgi?id=683536
WebM has a couple of specific requirements we need to handle.
The idea is to set this flag once and just rely on mux->is_webm
at run time, instead of repeatedly figuring this out from
GST_MATROSKA_DOCTYPE_WEBM (which requires a strcmp()).
The WebM spec states that SegmentUID is unsupported. Files produced
with GStreamer without this change spit out an error like
this when passed to mkvalidator:
ERR201: Invalid 'SegmentUID' for profile 'webm' in Info at 192
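Roughly, and with mux->doctype as an assumed field name, the change amounts to
something like:

    /* sketch: decide once, when the doctype is known... */
    mux->is_webm = (strcmp (mux->doctype, GST_MATROSKA_DOCTYPE_WEBM) == 0);

    /* ...and consult the flag later, e.g. to skip SegmentUID, which the WebM
     * profile does not allow */
    if (!mux->is_webm) {
      /* write the SegmentUID element as before */
    }
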
On some systems (e.g. uClibc and older glibc versions), O_CLOEXEC is only
defined when _GNU_SOURCE is specified, so do so.
_GNU_SOURCE needs to be defined before any system headers are included,
so move the fcntl.h section up.
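In other words, the top of the file ends up looking roughly like this:

    /* _GNU_SOURCE must come before the first system header so that fcntl.h
     * exposes O_CLOEXEC on uClibc and older glibc */
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif

    #include <fcntl.h>
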
https://bugzilla.gnome.org/show_bug.cgi?id=709423
When flush-stop arrives before we process the result of the _push() in the
loop function, we might pause even though we are not flushing anymore. Fix this
race by waiting for the srcpad loop function to completely pause after doing the
flush-start.
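One common way to guarantee that (a sketch of the idiom, with srcpad, element
and loop_func as placeholders) is to take the srcpad stream lock after
flush-start; the lock can only be acquired once the loop function is no longer
in the middle of an iteration:

    gst_pad_push_event (srcpad, gst_event_new_flush_start ());

    GST_PAD_STREAM_LOCK (srcpad);
    /* the streaming thread can no longer be in the middle of a _push(), so it
     * is safe to reset state, send flush-stop and restart the task */
    gst_pad_push_event (srcpad, gst_event_new_flush_stop (TRUE));
    gst_pad_start_task (srcpad, (GstTaskFunction) loop_func, element, NULL);
    GST_PAD_STREAM_UNLOCK (srcpad);
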
Since jpegdec already parses the JPEG stream, the sink caps can be
relaxed. This will allow jpegdec to be selected in more cases, in
particular when the JPEG typefinder does not find the width and height.
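A relaxed sink template would look roughly like this (the exact caps string is
illustrative):

    /* sketch: no width/height fields required on the sink side, since the
     * decoder parses those from the stream itself */
    static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
        GST_PAD_SINK, GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("image/jpeg"));
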
https://bugzilla.gnome.org/show_bug.cgi?id=709352
If we are not waiting for the missing seqnum when we insert the lost-packet
event in the jitterbuffer, we end up not updating next_seqnum and wait
forever for the lost packets to arrive. Instead, keep track of the number of
packets contained by a jitterbuffer item and update the next expected
seqnum only after pushing the buffer/event. This makes sure we correctly handle
gaps in the sequence numbers.
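As a sketch with illustrative names (not the actual rtpjitterbuffer
structures): each queued item records how many seqnums it covers, and the next
expected seqnum is advanced from that count only after the item has been
pushed:

    typedef struct
    {
      guint16 seqnum;               /* first seqnum covered by this item */
      guint count;                  /* buffers: 1, lost events: possibly more */
    } Item;

    static guint16
    advance_next_seqnum (const Item * item)
    {
      /* 16-bit arithmetic so sequence number wrap-around is handled */
      return (guint16) (item->seqnum + item->count);
    }
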
Doing so would be a regression over 1.0 and would break the unit test.
However, the result will most likely be unusable, so let's post
a warning message on the bus.
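Posting such a warning could look like this (the error domain, code and
message text here are illustrative):

    GST_ELEMENT_WARNING (element, STREAM, ENCODE, (NULL),
        ("The resulting file will most likely be unusable"));
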
Use GDateTime seconds manipulation to cover the QuickTime
spec for creation_time, which uses seconds since 1904.
Both paths could be handled with the generic approach of seconds since
1904 and GDateTime, but the first path, using seconds since
1970, should be the more common one and avoids creating a few objects and
some ref/unref, so keep it there for performance.
Additionally, the code for handling seconds since 1970 changed from >
to >= because 0 seconds since 1970 is also a valid case for that
path to handle.
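A minimal sketch of the 1904-epoch path, assuming a hypothetical helper name:

    /* sketch: seconds between the given time and the QuickTime epoch
     * (1904-01-01 00:00:00 UTC), computed with GDateTime */
    static guint64
    qt_time_from_date_time (GDateTime * dt)
    {
      GDateTime *epoch_1904 = g_date_time_new_utc (1904, 1, 1, 0, 0, 0);
      /* g_date_time_difference() returns microseconds (end - begin) */
      guint64 secs = g_date_time_difference (dt, epoch_1904) / G_TIME_SPAN_SECOND;

      g_date_time_unref (epoch_1904);
      return secs;
    }
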
https://bugzilla.gnome.org/show_bug.cgi?id=707975