Some of the subtitle chunks will have embedded
NUL terminators (the last three), some won't (the first three);
some will have markup, some won't; some will be valid
UTF-8 (all but the last), some won't (the last stanza).
https://bugzilla.gnome.org/show_bug.cgi?id=752421
Need to check that the number of bytes we want to copy from the adapter
is actually available and handle the error case gracefully. This error
may happen if malformed packets are received and we don't have a
complete frame.
https://bugzilla.gnome.org/show_bug.cgi?id=752663
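A minimal sketch of that check (not the element's actual code; the
helper name is made up):

#include <gst/gst.h>
#include <gst/base/gstadapter.h>

static GstFlowReturn
copy_from_adapter (GstAdapter * adapter, guint8 * dest, gsize needed)
{
  gsize avail = gst_adapter_available (adapter);

  if (avail < needed) {
    /* malformed packets / incomplete frame: fail gracefully, don't crash */
    GST_WARNING ("need %" G_GSIZE_FORMAT " bytes, only %" G_GSIZE_FORMAT
        " available", needed, avail);
    return GST_FLOW_ERROR;
  }
  gst_adapter_copy (adapter, dest, 0, needed);
  return GST_FLOW_OK;
}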
The subtitle buffer we push out should not include a NUL terminator
as part of the data; we add such a terminator for safety, but
it should not be included in the buffer size.
A NUL terminator is not valid UTF-8, so checks will fail if it's
included in the size, and the NUL will then be replaced by the fallback
character specified when converting, i.e. '*'.
https://bugzilla.gnome.org/show_bug.cgi?id=752421
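A minimal sketch of the technique (helper name made up): allocate one
extra byte for the NUL, but report only the payload length as the
buffer size:

#include <gst/gst.h>

static GstBuffer *
make_subtitle_buffer (const guint8 * data, gsize len)
{
  GstBuffer *buf = gst_buffer_new_allocate (NULL, len + 1, NULL);

  gst_buffer_fill (buf, 0, data, len);
  gst_buffer_memset (buf, len, 0, 1);   /* safety NUL, outside the reported size */
  gst_buffer_set_size (buf, len);       /* size excludes the NUL terminator */
  return buf;
}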
In certain applications, splitting into files named after a base
location template and an incremental sequence number is not enough.
This signal gives the application more fine-grained control over
how the files are named.
https://bugzilla.gnome.org/show_bug.cgi?id=750106
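As a hedged illustration of how an application might use such a signal
(the element name, signal name and callback signature below are
assumptions, not taken from this change):

#include <gst/gst.h>

static gchar *
on_format_location (GstElement * sink, guint fragment_id, gpointer user_data)
{
  /* the returned string is used as the next output file name */
  return g_strdup_printf ("/tmp/%s-%05u.mp4",
      (const gchar *) user_data, fragment_id);
}

static void
setup_file_naming (GstElement * sink)
{
  g_signal_connect (sink, "format-location",
      G_CALLBACK (on_format_location), (gpointer) "camera1");
}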
For 1ch or 2ch devices, we just need to set the caps to allow both
options, since CoreAudio will up/downmix appropriately.
Also fix the condition for the 2ch case to be exact rather than
"at least 2 channels", since the downmix will not take place in the
>stereo case.
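A minimal sketch of the caps idea, assuming raw float audio (not the
actual osxaudio code):

#include <gst/gst.h>

static GstCaps *
make_mono_stereo_caps (gint rate)
{
  /* advertise both channel counts; CoreAudio up/downmixes appropriately */
  return gst_caps_new_simple ("audio/x-raw",
      "format", G_TYPE_STRING, "F32LE",
      "rate", G_TYPE_INT, rate,
      "channels", GST_TYPE_INT_RANGE, 1, 2, NULL);
}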
We need to initialize the AudioUnit early to be able to probe the
underlying device, but according to the AudioUnitInitialize() and
AudioUnitUninitialize() documentation, format changes should be done
while the AudioUnit is uninitialized. So we explicitly uninitialize the
AudioUnit during a format change and reinitialize it when we're done.
_audio_unit_property_listener is called either from a Core Audio thread
or as a result of a Core Audio API call (e.g. AudioUnitInitialize)
from our own thread. In the latter case, osxbuf can already be locked
(GStreamer's mutex is not recursive).
We introduce the flag cached_caps_valid and use it instead of nullifying
cached_caps when we cannot lock on osxbuf.
https://bugzilla.gnome.org/show_bug.cgi?id=743758
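A sketch of the flag idea only; the container type, field names and
lock below are stand-ins, not the plugin's real code:

#include <gst/gst.h>

typedef struct {
  GMutex lock;                  /* stands in for the osxbuf lock (non-recursive) */
  GstCaps *cached_caps;         /* caps last probed from the hardware            */
  gboolean cached_caps_valid;   /* FALSE means "stale, re-probe when possible"   */
} CoreAudioCtx;

static void
hw_format_changed (CoreAudioCtx * ctx)
{
  if (g_mutex_trylock (&ctx->lock)) {
    /* we could take the lock: safe to drop the cached caps right away */
    gst_caps_replace (&ctx->cached_caps, NULL);
    g_mutex_unlock (&ctx->lock);
  } else {
    /* listener ran from our own thread while osxbuf is already locked:
     * just mark the cache invalid instead of touching it */
    ctx->cached_caps_valid = FALSE;
  }
}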
- Probing caps is unified between source and sink
- Hardware stream format is now reported as preferred capabilities
(dynamically updated when hardware configuration changes)
- Get hardware channel layout from Remote IO just like from HAL
- More comprehensive mapping between AudioChannelLabel and
GstAudioChannelPosition
- Support for unpositioned channel layouts
- Announce stereo-mono upmixing/downmixing in caps
https://bugzilla.gnome.org/show_bug.cgi?id=743758
For more optimised RTP packet handling: this means we don't
need to map the input buffer again but can just re-use
the mapping the base class has already done.
https://bugzilla.gnome.org/show_bug.cgi?id=750235
For more optimised RTP packet handling: this means we don't
need to map the input buffer again but can just re-use
the mapping the base class has already done.
https://bugzilla.gnome.org/show_bug.cgi?id=750235
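Assuming this refers to the process_rtp_packet() vfunc on
GstRTPBaseDepayload (which hands the subclass an already-mapped
GstRTPBuffer), a minimal sketch of a depayloader using it:

#include <gst/rtp/gstrtpbasedepayload.h>
#include <gst/rtp/gstrtpbuffer.h>

/* set as klass->process_rtp_packet instead of klass->process in class_init */
static GstBuffer *
my_depay_process_rtp_packet (GstRTPBaseDepayload * depay, GstRTPBuffer * rtp)
{
  /* no gst_rtp_buffer_map() here: the base class already mapped the packet */
  GST_LOG_OBJECT (depay, "payload of %u bytes",
      gst_rtp_buffer_get_payload_len (rtp));

  return gst_rtp_buffer_get_payload_buffer (rtp);
}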
Estimating it from the RTP time will give us the PTS, so in cases where
PTS != DTS we would produce a wrong DTS. Now that the estimated DTS is
based on the clock, don't store it in the jitterbuffer items, as it would
otherwise be used in the skew calculations and influence the results.
We only really need the DTS for timer calculations.
https://bugzilla.gnome.org/show_bug.cgi?id=749536
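A minimal sketch of what "estimated DTS based on the clock" amounts to
(helper name made up): take the arrival time from the pipeline clock
and convert it to running time:

#include <gst/gst.h>

static GstClockTime
estimate_dts_from_clock (GstElement * jitterbuffer)
{
  GstClock *clock = gst_element_get_clock (jitterbuffer);
  GstClockTime dts = GST_CLOCK_TIME_NONE;

  if (clock != NULL) {
    GstClockTime base_time = gst_element_get_base_time (jitterbuffer);
    GstClockTime now = gst_clock_get_time (clock);

    if (now > base_time)
      dts = now - base_time;    /* running time at which the packet arrived */
    gst_object_unref (clock);
  }
  return dts;
}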
Replace static constants with macros to make gcc happy
CC elements/elements_rtpjitterbuffer-rtpjitterbuffer.o
elements/rtpjitterbuffer.c:387:1: error: initializer element is not constant
static const GstClockTime PCMU_BUF_DURATION = PCMU_BUF_MS * GST_MSECOND;
^
elements/rtpjitterbuffer.c:388:1: error: initializer element is not constant
static const guint PCMU_BUF_SIZE = 64000 * PCMU_BUF_MS / 1000;
^
elements/rtpjitterbuffer.c:390:5: error: initializer element is not constant
PCMU_BUF_CLOCK_RATE * PCMU_BUF_MS / 1000;
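A sketch of the fix (the PCMU_BUF_MS and clock-rate values are filled in
for illustration, and the third constant from the log above is not
shown): use preprocessor macros, since in C a static const object is not
a constant expression and cannot initialise another static constant.

#include <gst/gst.h>   /* for GST_MSECOND */

#define PCMU_BUF_MS         20                            /* assumed value */
#define PCMU_BUF_CLOCK_RATE 8000                          /* assumed value */
#define PCMU_BUF_DURATION   (PCMU_BUF_MS * GST_MSECOND)
#define PCMU_BUF_SIZE       (64000 * PCMU_BUF_MS / 1000)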
When a new time segment is received, upstream is going to restart
with a new atom. Make the neededbytes and todrop variables
reflect that, to avoid waiting too long or dropping the
initial bytes that contain the header.
The adapter might have data remaining from the previous segment;
push it all before clearing the adapter and starting a new segment.
Data can accumulate if a push returned not-linked and we returned
immediately without processing all of it. This data should be
handled before starting a new segment.
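A minimal sketch of the drain-before-clear idea (the processing
callback is hypothetical; the real element runs the leftover bytes
through its normal parsing path):

#include <gst/base/gstadapter.h>

typedef GstFlowReturn (*ProcessFunc) (GstAdapter * adapter);

static GstFlowReturn
flush_previous_segment (GstAdapter * adapter, ProcessFunc process)
{
  GstFlowReturn ret = GST_FLOW_OK;

  /* handle whatever the previous segment left behind, e.g. after a
   * not-linked return made us bail out before consuming everything
   * (assumes process() consumes data from the adapter on each call) */
  while (ret == GST_FLOW_OK && gst_adapter_available (adapter) > 0)
    ret = process (adapter);

  /* only now drop the rest and start the new segment cleanly */
  gst_adapter_clear (adapter);
  return ret;
}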
The amount of time that is completely expired and not worth waiting for
is the duration of the packets in the gap (gap * duration) minus the
latency (size) of the jitterbuffer (priv->latency_ns). This is the
duration that we make a "multi-lost" packet for.
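A small sketch of that calculation (names assumed):

#include <gst/gst.h>

static GstClockTime
expired_gap_duration (guint gap, GstClockTime packet_duration,
    GstClockTime latency_ns)
{
  GstClockTime total = gap * packet_duration;

  /* everything beyond the jitterbuffer latency is not worth waiting for;
   * this is the span covered by a single "multi-lost" item */
  return (total > latency_ns) ? total - latency_ns : 0;
}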
The "late" concept made some sense in 0.10 as it reflected that a buffer
coming in had not been waited for at all, but had a timestamp that was
outside the jitterbuffer to wait for. With the rewrite of the waiting
(timeout) mechanism in 1.0, this no longer makes any sense, and the
variable no longer reflects anything meaningful (num > 0 is useless,
the duration is what matters)
Fixed up the tests that had been slightly modified in 1.0 to allow faulty
behavior to sneak in, and port some of them to use GstHarness.
https://bugzilla.gnome.org/show_bug.cgi?id=738363
This reverts commit 05bd708fc5.
The reverted patch is wrong and introduces a regression because there
may still be time to receive some of the packets included in the gap
if they are reordered.
Avoids accumulating all samples from a fragmented stream, which could
lead to an 'index-too-big' error once the index goes over 50MB of data.
That point can be reached in less than 2 hours of playback, so it
doesn't take that long.
As upstream elements are providing data in time format, they should be
the ones with more information about the full media index and should
be able to handle seeking if possible.
upstream_newsegment isn't really clear about what it means: it is set
to TRUE when the upstream element sends a segment in TIME format, so
rename it to make that clearer.
It is important to know this because it means that upstream has
a notion of time and qtdemux is likely being driven by an upstream
element that is reading from a higher-level abstraction than a file,
such as a DASH, MSS or DLNA element.
In fragmented streaming, multiple moov/moof boxes will be parsed and
the previously stored samples array might leak when new values are
parsed: parse_trak and its callees don't free the previously stored
values before parsing the new ones.
Step by step, this is what happens:
1) initial moov is parsed, traks as well, streams are created. The
trak doesn't contain samples because they are in the moof's trun
boxes. n_samples is set to 0 while parsing the trak and the samples
array is still NULL.
2) moofs are parsed, and their trun boxes will increase n_samples and
create/extend the samples array
3) At some point a new moov might be sent (bitrate switching, for example)
and parsing the trak will overwrite n_samples with the values from
this trak. If n_samples is set to 0, qtdemux will assume that
the samples array is NULL and will leak it when a new one is
created for the subsequent moofs.
This patch makes qtdemux properly free previous sample data before
creating new ones and adds an assert to catch future occurrences of
this issue when the code changes.
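A sketch of the idea only; the stream struct below is a stand-in, with
field names taken from the description above:

#include <gst/gst.h>

typedef struct {
  gpointer samples;     /* sample table built from trun boxes */
  guint n_samples;      /* number of entries in the table     */
} Stream;

static void
stream_prepare_for_new_moov (Stream * stream)
{
  /* free the table built from the previous moov/moof sequence so parsing
   * the new trak cannot leak it when it resets n_samples */
  g_free (stream->samples);
  stream->samples = NULL;
  stream->n_samples = 0;

  /* the invariant the new assert guards: no sample array may be left
   * around once the count is back to zero */
  g_assert (stream->samples == NULL && stream->n_samples == 0);
}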
Also make it so that the mtu is always set if specified, not
only in the case of the rather weird bufferlist test code path.
This allows us to easily make the payloader fragment a payload
across multiple output packets by setting a small MTU on it.
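A minimal usage sketch (the payloader chosen below is arbitrary; any
payloader derived from GstRTPBasePayload exposes the "mtu" property):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pay;

  gst_init (&argc, &argv);

  pay = gst_element_factory_make ("rtpvorbispay", NULL);
  /* a small MTU forces the payload to be fragmented across several packets */
  g_object_set (pay, "mtu", 100, NULL);

  gst_object_unref (pay);
  return 0;
}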
This reverts commit d46631c5c7.
The pad only handles EOS events but not EOS flow, and will push the buffer
again, resulting in an assertion error. So we should not handle the buffer
and should instead return EOS flow.