Such seeks are used to change the playback rate and we do not want
to alter the position in that case, so we bypass the flush/seek
logic and arrange for a new segment to be regenerated.
https://bugzilla.gnome.org/show_bug.cgi?id=735100
This will happen when the PMT changes, replacing streams with
new ones. In that case, we need to accumulate the running time
from the previous chain in the segment base.
https://bugzilla.gnome.org/show_bug.cgi?id=745102
Always update the segment, not only for accurate seeking, and always
send a new segment event after seeks.
For non-accurate seeks, force a reset of our segment info to start
from where the seek led us, since we don't need to be accurate.
https://bugzilla.gnome.org/show_bug.cgi?id=743363
The flush is called on discont and we shouldn't output a new segment
each time a discont happens. So this commit removes the mark for a new
segment when flushing streams, by propagating the 'hard' flag passed
on the flush from the base class.
https://bugzilla.gnome.org/show_bug.cgi?id=743363
When dealing with random-access content (such as files), we initially
search for the last PCR in order to figure out the duration and to handle
other position estimations, such as those used for seeking.
Previously, the code looking for that last PCR would search in the last
640kB of the file going forward, and stop at the first PCR encountered.
The problem with that was two-fold:
* It wouldn't really be the last PCR (it would be the first one within
those last 640kB). In the case of VBR files, this would throw off the
duration and seek code slightly.
* It would fail on files with bitrates higher than 52Mbit/s (not common)
Instead this patch modifies that code by:
* Scanning over the last 2048kB (which copes with streams of up to 160Mbit/s)
* Starting from the end of the file, going over chunks of 300 MPEG-TS packets
* Not stopping at the first PCR detected in a chunk, but instead recording
all of them, and only stopping the search once there was at least one PCR
within that chunk (a rough sketch of that scan is shown below)
This should improve duration reporting and seeking on VBR files.
https://bugzilla.gnome.org/show_bug.cgi?id=708532
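A rough sketch of that backward chunk scan in plain C, assuming 188-byte
packets aligned to the start of the file and ignoring the PCR-PID filtering
the real packetizer performs:

    #include <stdint.h>
    #include <stddef.h>

    #define TS_PACKET_SIZE  188
    #define CHUNK_PACKETS   300
    #define SCAN_WINDOW     (2048 * 1024)   /* last 2048kB of the file */

    /* Extract the PCR (in 27MHz units) from one aligned TS packet, if any. */
    static int
    packet_get_pcr (const uint8_t *p, uint64_t *pcr)
    {
      uint64_t base, ext;

      if (p[0] != 0x47 || !(p[3] & 0x20))   /* sync byte / adaptation field */
        return 0;
      if (p[4] < 7 || !(p[5] & 0x10))       /* field too short / no PCR flag */
        return 0;

      base = ((uint64_t) p[6] << 25) | ((uint64_t) p[7] << 17) |
          ((uint64_t) p[8] << 9) | ((uint64_t) p[9] << 1) | (p[10] >> 7);
      ext = ((p[10] & 0x01) << 8) | p[11];
      *pcr = base * 300 + ext;
      return 1;
    }

    /* Walk chunks of 300 packets from the end of the scan window towards the
     * start; record every PCR within a chunk (the last one wins) and stop as
     * soon as a chunk contained at least one. */
    static int
    find_last_pcr (const uint8_t *file, size_t file_size, uint64_t *last_pcr)
    {
      size_t chunk = CHUNK_PACKETS * TS_PACKET_SIZE;
      size_t window_start = file_size > SCAN_WINDOW ? file_size - SCAN_WINDOW : 0;
      size_t pos = file_size - (file_size % TS_PACKET_SIZE);

      window_start -= window_start % TS_PACKET_SIZE;

      while (pos > window_start) {
        size_t start = pos > window_start + chunk ? pos - chunk : window_start;
        int found = 0;
        size_t off;

        for (off = start; off + TS_PACKET_SIZE <= pos; off += TS_PACKET_SIZE)
          found |= packet_get_pcr (file + off, last_pcr);

        if (found)
          return 1;
        pos = start;
      }
      return 0;
    }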
As a consequence, tsdemux won't remove its pads anymore on EOS.
Fixes the case where mpegtsbase is not able to process new packets
after EOS because the corresponding pids aren't known anymore: the
programs were removed but the pes/psi were kept, preventing the
PAT from being parsed again.
https://bugzilla.gnome.org/show_bug.cgi?id=738695
Assume that small backward PCR jumps are just due to upstream packet
mis-ordering and don't reset the timestamp tracking state, on the
assumption that things will be OK again shortly.
Make the threshold for detecting discont between sequential buffers
configurable and match the smoothing-latency setting on tsparse
to better cope with data bursts.
When the set-timestamps property is set, use PCRs on the provided
(or autodetected) pcr-pid to apply (or replace) timestamps on the
output buffers, using piece-wise linear interpolation.
This allows tsparse to be used to stream an arbitrary mpeg-ts file,
or to smooth jittery reception timestamps from a network stream.
The reported latency is increased to match the smoothing latency if
necessary.
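A minimal sketch of the piece-wise linear interpolation between two PCR
observations (byte offset mapped to running time); the function and variable
names are illustrative, not the actual tsparse fields:

    #include <gst/gst.h>

    /* Interpolate the timestamp of a buffer starting at byte position
     * 'offset', lying between two PCR observations a and b that map byte
     * offsets to running times. */
    static GstClockTime
    interpolate_timestamp (guint64 offset,
        guint64 offset_a, GstClockTime time_a,
        guint64 offset_b, GstClockTime time_b)
    {
      if (offset_b <= offset_a || offset <= offset_a)
        return time_a;
      /* time_a + (offset - offset_a) * (time_b - time_a) / (offset_b - offset_a) */
      return time_a + gst_util_uint64_scale (offset - offset_a,
          time_b - time_a, offset_b - offset_a);
    }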
Signal sparse streams properly in the stream-start event and force sending
of pending sticky events which have been stored on the pad already and
which would otherwise only be sent on the first buffer or serialized
event (which means very late in the case of subtitle streams). Playsink in
playbin waits for stream-start or another serialized event, and if we
don't do this it will wait for the multiqueue to run full before
starting playback, which might take a couple of seconds.
https://bugzilla.gnome.org/show_bug.cgi?id=734040
All pads of a stream are now added at the beginning. In order to cope with
streams that don't get any data (forever or for a long time) we detect gaps
and push out GAP events when needed.
Cleanups and commenting by Jan Schmidt <jan@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=734040
If a discontinuity in the stream is detected, data is discarded until
a new PES starts. If the first packet after the discontinuity is also
the start of a PES, there is no reason to discard the packets.
https://bugzilla.gnome.org/show_bug.cgi?id=737569
packet_length is defined as a guint16 in the PESHeader structure. This
definition matches the specification. But since we add 6 bytes to the
packet_length value (the length of start_code + stream_id + packet_length),
we can overflow the guint16 when the value in the PES header is greater
than 65529.
So use a guint32 instead of a guint16 to avoid the overflow.
https://bugzilla.gnome.org/show_bug.cgi?id=736490
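A small demonstration of the wraparound (values chosen for illustration):

    #include <glib.h>
    #include <stdio.h>

    int
    main (void)
    {
      guint16 pes_length = 65530;   /* PES_packet_length as read from the header */

      /* Adding the 6 bytes of start_code + stream_id + packet_length wraps
       * around when the result is stored back into a guint16 ... */
      guint16 wrong = pes_length + 6;           /* 65536 -> 0  */
      /* ... but is fine once the field is widened to 32 bits. */
      guint32 right = (guint32) pes_length + 6; /* 65536       */

      printf ("guint16: %u, guint32: %u\n", (unsigned) wrong, (unsigned) right);
      return 0;
    }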
32 bit integers are going to overflow; in particular the PCR offset to
the first PCR will overflow after about 159 seconds. This makes playback
of streams stop at 159 seconds, as the timestamps suddenly start
again from 0. Now we have a few more years until it happens again
and 64 bits are too small.
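The arithmetic behind those numbers (PCRs tick at 27MHz):

    #include <stdio.h>

    int
    main (void)
    {
      /* A 32 bit counter of 27MHz ticks wraps after: */
      double sec32 = 4294967296.0 / 27000000.0;           /* ~159 seconds */
      /* whereas a 64 bit counter only wraps after: */
      double sec64 = 18446744073709551616.0 / 27000000.0;

      printf ("32 bit: %.1f seconds, 64 bit: ~%.0f years\n",
          sec32, sec64 / (365.25 * 24 * 3600));
      return 0;
    }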
It was previously a mix and match of both variants, introducing just too much
confusion.
The prefixes are from now on:
* GstMpegts for structures and type names (and not GstMpegTs)
* gst_mpegts_ for functions (and not gst_mpeg_ts_)
* GST_MPEGTS_ for enums/flags (and not GST_MPEG_TS_)
* GST_TYPE_MPEGTS_ for types (and not GST_TYPE_MPEG_TS_)
The rationale for choosing that is:
* the namespace is shorter/direct (it's mpegts, not mpeg_ts nor mpeg-ts)
* the namespace is one word under Gst
* it's shorter (yah)
When a wrapover/reset occurs, we end up with a small window of time where
the PTS/DTS will still be using the previous/next time-range.
In order not to return bogus values, return GST_CLOCK_TIME_NONE if the
PTS/DTS value to convert differs by more than 15s from the last seen
PCR.
https://bugzilla.gnome.org/show_bug.cgi?id=674536
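A minimal sketch of that sanity check, assuming both values have already
been converted to GstClockTime:

    #include <gst/gst.h>

    /* Refuse to convert a PTS/DTS that lies more than 15 seconds away from
     * the last seen PCR; during a wrapover/reset window such values still
     * belong to the previous/next time-range. */
    static GstClockTime
    checked_pts (GstClockTime ts, GstClockTime last_pcr_time)
    {
      GstClockTime diff = ts > last_pcr_time ?
          ts - last_pcr_time : last_pcr_time - ts;

      return diff > 15 * GST_SECOND ? GST_CLOCK_TIME_NONE : ts;
    }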
Using 32bit unsigned values for the corrected pcr/offset meant that we
potentially ended up with bogus values.
Furthermore, refpcr - refpcroffset could end up being negative, which
PCRTIME_TO_GSTTIME() can't handle (it returned a massive positive value).
Co-Authored by: Thibault Saunier <tsaunier@gnome.org>
From a high-level perspective, the new process for seeking h264
streams is as follows:
1) Rewind the stream until we find the first I-slice of a frame,
and mark its offset in the stream.
2) Rewind the stream until we find SPS and PPS information,
to make sure the subsequent parser is up to date.
3) Accumulate optional SEI NAL units on the way.
4) Push the SPS, PPS and SEI units before the new keyframe
(a rough sketch of these steps is shown below).
https://bugzilla.gnome.org/show_bug.cgi?id=675132
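A rough sketch of those four steps; every type and helper below is a
hypothetical stand-in for the real tsdemux/parser code, only the ordering
of the steps matters:

    #include <glib.h>

    /* Hypothetical types and helpers standing in for the real code. */
    typedef struct _H264SeekCtx H264SeekCtx;
    gboolean scan_back_for_i_slice (H264SeekCtx *ctx, guint64 from, guint64 *kf_offset);
    gboolean scan_back_for_sps_pps (H264SeekCtx *ctx, guint64 from, GList **sei);
    gpointer ctx_sps (H264SeekCtx *ctx);
    gpointer ctx_pps (H264SeekCtx *ctx);
    void     push_nal (H264SeekCtx *ctx, gpointer nal);

    static gboolean
    seek_to_keyframe (H264SeekCtx *ctx, guint64 *io_offset)
    {
      guint64 keyframe_offset;
      GList *sei = NULL, *l;

      /* 1) Rewind until the first I-slice of a frame, remember its offset. */
      if (!scan_back_for_i_slice (ctx, *io_offset, &keyframe_offset))
        return FALSE;

      /* 2) Keep rewinding until SPS and PPS were found so the downstream
       *    parser is up to date, 3) accumulating optional SEI units on the way. */
      if (!scan_back_for_sps_pps (ctx, keyframe_offset, &sei))
        return FALSE;

      /* 4) Push SPS, PPS and the collected SEI just before the new keyframe. */
      push_nal (ctx, ctx_sps (ctx));
      push_nal (ctx, ctx_pps (ctx));
      for (l = sei; l; l = l->next)
        push_nal (ctx, l->data);
      g_list_free (sei);

      *io_offset = keyframe_offset;
      return TRUE;
    }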
If _set_current_pcr_offset got called after a flushing seek, we ended
up using the current group for the delta calculation ... whereas we should
be using the first group to calculate shifts.
Also add an early exit if there are no changes to apply
When working in push mode, we need to be able to evaluate the duration
based on a single group of observations.
To do that we use the current group's values.
When handling the PTS/DTS conversion in new groups, there's a possibility
that the PTS might be smaller than the first PCR value observed, due to
re-ordering.
When using the current group, only apply the wraparound correction when we
are certain it is one (i.e. differs by more than a second) and not when it's
just a small difference (like out-of-order PTS).
https://bugzilla.gnome.org/show_bug.cgi?id=731088
When we receive sticky events from upstream, always return TRUE.
Fixes the issue where we receive custom sticky events (such as "uri")
and no pads are created yet.
Since all the other timestamp tracking now gets reset on a discont,
it makes sense to wait for a PCR before timestamping buffers, just like
when playback first starts.
Due to the streaming nature of mpegts, some pads are created but only
added to the element later. This can cause a scenario where the first
stream doesn't have an available decoder (while the next ones, still
pending, would have one) and tsdemux will fail with not-linked, as the
first stream added wouldn't be linked.
To avoid this, tsdemux needs to add pads to the flowcombiner
when they are created instead of only when adding them to the
element.
* Search in the current pending values first. For CBR streams we can very
easily end up having just one initial observation and then nothing
else (since the bitrate doesn't change).
* Use a single group either when we are within that group *OR* when there
is only one group.
* If the group to use isn't closed (points are still being accumulated in
the PCROffsetCurrent), use the latest available data for the calculation.
* If, in the unlikely case, all of this *still* didn't produce more
than one data point, just return the initial offset.
While the calculation done in these macros works with 64bit
integers, it fails when working with 32bit integers.
Force the scaling up to 64 bits to solve that.
Amazingly this didn't introduce major issues up to now, but it did result
in bogus values in the debug logs.
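For illustration, a conversion macro of this shape (the 27MHz-to-nanosecond
factor is the spec relationship; the macro names are simplified stand-ins):

    #include <glib.h>

    /* With a guint32 argument the multiplication below happens in 32 bits
     * and overflows after a fraction of a second of PCR time: */
    #define PCRTIME_TO_GSTTIME_BROKEN(t)  (((t) * 1000) / 27)

    /* Forcing the computation up to 64 bits fixes it regardless of the
     * argument's type (27MHz ticks -> nanoseconds): */
    #define PCRTIME_TO_GSTTIME_FIXED(t)   (((guint64) (t) * 1000) / 27)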
Doing a hard flush on the packetizer will drop all observations, which
will eventually break push-based seeking (with BYTES segment) since
we won't know where to seek to anymore (new data would always be
considered as the beginning of the stream).
While this probably should never happen if callers are well behaved,
this avoids a crash if it does. With a warning about it. Unsure if
it'd be better to not add at all, but it should not happen...
Coverity 1139713
gst_ts_demux_push_pending_data() will check if it now can activate the
stream and add the pad, we don't have to check that ourselves.
Fixes playback of very short MPEG TS files.
Apart from just adding detection of the proper stream type, we also need to only
output the first substream (0x71) which contains the core substream.
While this does not provide *full* DTS-HD support (since it will miss the complementary
substreams), it will still behave the way legacy (non-DTS-HD) bluray players do.
https://bugzilla.gnome.org/show_bug.cgi?id=725563
Keep a list of the current global tags around and push them
whenever a new stream is started. Also convert all stream-specific
tags to global ones, as they are stream-specific at the container
level and therefore global for the streams within that container.
https://bugzilla.gnome.org/show_bug.cgi?id=644395
The PAT is related to the stream; we therefore want it cleared along
with anything stream-related.
This commented section was from the (old) mpegtsparse and *might* have
been related to speeding up DVB start-up. But we have another plan for that.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724716
The requested TS might be beyond the last observed PCR. In order to calculate
a coherent offset, we need to use the last and previous-to-last groups.
https://bugzilla.gnome.org/show_bug.cgi?id=721035
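A sketch of such an extrapolation using the last two groups as
(running time, byte offset) points; the field names are simplified, not the
actual PCROffsetGroup layout:

    #include <gst/gst.h>

    typedef struct {
      GstClockTime pcr_offset;   /* running time at the group's start */
      guint64      byte_offset;  /* byte position at the group's start */
    } GroupPoint;

    /* Extrapolate the byte offset of a running time beyond the last group,
     * using the byte/time slope between the previous-to-last and last groups. */
    static guint64
    extrapolate_offset (GstClockTime target, GroupPoint prev, GroupPoint last)
    {
      GstClockTime span = last.pcr_offset - prev.pcr_offset;
      guint64 bytes = last.byte_offset - prev.byte_offset;

      if (span == 0 || target <= last.pcr_offset)
        return last.byte_offset;

      return last.byte_offset +
          gst_util_uint64_scale (target - last.pcr_offset, bytes, span);
    }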
The original code (old mpegtsparse) on which this plugin was based
was dual-licensed. This allowed usage of the code under any of the
licenses (which include the LGPL):
"""
* Alternatively, the contents of this file may be used under the terms of
* the GNU Lesser General Public License Version 2 or later (the "LGPL"),
* in which case the provisions of the LGPL are applicable instead
* of those above. If you wish to allow use of your version of this file only
* under the terms of the LGPL, and not to allow others to
* use your version of this file under the terms of the MPL, indicate your
* decision by deleting the provisions above and replace them with the notice
* and other provisions required by the LGPL. If you do not delete
* the provisions above, a recipient may use your version of this file under
* the terms of the MPL or the LGPL.
"""
When refactored (leading to the creation of this new plugin), I chose all
new code to be LGPL-only (which was allowed for pre-existing code) by removing
the MPL sections.
The headers were all updated, but not the plugin license field. This commit
fixes this.
It is quite possible that we might get PTS/DTS before the first
PCR/Offset observation.
In order to end up with valid timestamps we wait until at least one
stream was able to get a proper running-time for any PTS/DTS.
Until then, we queue up the pending buffers to push out.
Once we see a first valid timestamp, we re-evaluate the amount of
running-time elapsed (based on the returned initial running-time and the
amount of data/DTS queued up) for each stream.
Taking the biggest amount of elapsed time, we set that on the packetizer
as the initial offset and recalculate the running-time PTS/DTS of all
pending buffers.
Note: The buffer queueing system can also be used later on for the
dvb fast start proposal (where we queue up all stream packets before
seeing PAT/PMT and then push them once we know if they belong to the
chosen program).
This allows:
* Better duration estimation
* More accurate PCR location
* Overall more accurate running-time location and calculation
Location and values of PCR are recorded in groups (PCROffsetGroup)
with notable PCR/Offset observations in them (when bitrate changed
for example). PCR and offset are stored as 32bit values to
reduce memory usage (they are differences against that group's
first_{pcr|offset}).
Those groups each contain a global PCR offset (pcr_offset) which
indicates how far in the stream that group is.
Whenever new PCR values are observed, we store them in a sliding
window estimator (PCROffsetGroupCurrent).
When a reset/wrapover/gap is detected, we close the current group with
current values and start a new one (the pcr_offset of that new group
is also calculated).
When a notable change in bitrate is observed (+/- 10%), we record
new values in the current group. This is a compromise between
storing all PCR/offset observations and none, while at the same time
providing better information for running-time<=>offset calculation
in VBR streams.
Whenever a new non-contiguous group is started (due to seeking for example)
we re-evaluate the pcr_offset of each group. This allows detecting PCR
wrapovers/resets as quickly as possible.
When wanting to find the offset of a certain running-time, one can
iterate the groups by looking at the pcr_offset (which in essence *is*
the running-time of that group in the overall stream).
Once a group (or neighbouring groups, if the running-time is between two
groups) is found, one can use the recorded values to find the most
accurate offset.
Right now this code is only used in pull-mode, but could also
be activated later on for any seekable stream, like live timeshift
with queue2.
Future improvements:
* some heuristics to "compress" the stored values in groups so as to keep
the memory usage down while still keeping a decent amount of notable
points.
* After a seek compare expected and obtained PCR/Offset and if the
difference is too big, re-calculate position with newly observed
values and seek to that more accurate position.
Note that this code will *not* provide keyframe-accurate seeking, but
will allow a much more accurate PCR/running-time/offset location on
any random stream.
For past (observed) values it will be as accurate as can be.
For future values it will be better than the current situation.
Finally the more you seek, the more accurate your positioning will be.
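A simplified sketch of the data layout described above (the real structures
carry more bookkeeping fields than shown here):

    #include <glib.h>

    typedef struct {
      guint64  first_pcr;      /* absolute values of the group's first observation */
      guint64  first_offset;
      guint64  pcr_offset;     /* how far in the overall stream this group starts */

      /* Notable observations, stored as 32 bit deltas against
       * first_pcr/first_offset to keep memory usage down. */
      guint32 *pcrs;
      guint32 *offsets;
      guint    nb_observations;
    } PCROffsetGroup;

    /* Sliding-window estimator fed with the most recent PCR observations,
     * used to update (and eventually close) the current group. */
    typedef struct {
      PCROffsetGroup *group;   /* group currently being filled */
      guint64 last_pcr;
      guint64 last_offset;
    } PCROffsetCurrent;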
The previous code could enter an infinite loop because the adapter state
could get out of sync with its mapped data state after sync was lost.
The code was pretty confusing so it's been rewritten to be clearer.
The easiest way to reproduce the infinite loop is to use the breakmydata
element before tsdemux to trigger a resync.
https://bugzilla.gnome.org/show_bug.cgi?id=708161
Some streams had wrong values for the stream_id_extension, so make sure
we only remember the valid ones.
For streams with PES_extension_field_length == 0, assume there's nothing
else.
For streams that state they have a TREF extension but don't have enough
data to store it, just assume it was produced by a non-compliant muxer
and skip the remaining data.
Only store the remaining data in stream_id_extension_data instead of
storing data we already parsed.
If ever we lose sync, we were just checking for the next 0x47 marker ...
which might actually happen within an mpeg-ts packet.
Instead check for 3 repeating 0x47 bytes at the expected packet size
interval, which is the same logic we use when we initially look for the
packet size.
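A minimal sketch of that check:

    #include <stdint.h>
    #include <stddef.h>

    /* Instead of accepting the first 0x47 byte (which can legitimately
     * appear inside a packet payload), require three 0x47 sync bytes spaced
     * exactly one packet apart.  packet_size is 188, 192 or 204 in practice.
     * Returns the resync offset, or -1 if none was found. */
    static long
    find_resync (const uint8_t *data, size_t avail, size_t packet_size)
    {
      size_t i;

      if (avail < 2 * packet_size + 1)
        return -1;

      for (i = 0; i + 2 * packet_size < avail; i++) {
        if (data[i] == 0x47 && data[i + packet_size] == 0x47 &&
            data[i + 2 * packet_size] == 0x47)
          return (long) i;
      }
      return -1;
    }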
We were only resetting the first 512 values of the lookup table instead
of the whole 8192.
This resulted in any PCR PID over 0x0200 ... ending up taking the first PCR
table around :(
ATSC ac3 streams are always guaranteed to be AC3 if the EAC3 descriptor
is not present.
If the stream registration id is 'AC-3' then it's also guaranteed to be AC3.
Finally, if the AC3 descriptor is present it's guaranteed to be AC3.
Only silences a warning, but still.
We know we will not overflow 64 bits, therefore just use direct
multiplication/division instead of the scale method (trims usage from
50 instruction calls to 2/3).
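For illustration (gst_util_uint64_scale() guards against 128-bit overflow,
which is what costs the extra instructions):

    #include <glib.h>

    /* Equivalent to gst_util_uint64_scale (val, num, denom), but valid only
     * when the caller guarantees val * num fits in 64 bits. */
    static inline guint64
    fast_scale (guint64 val, guint64 num, guint64 denom)
    {
      return val * num / denom;
    }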
Helps with debugging issues. And also remove unused variable (opcr)
This will also allow us in the future to properly detect:
* random-access location (to enable keyframe observation and
potentially seeking)
* discont location (to properly handle resets)
* splice location (to properly handle new stream changes)
The new seek handling re-creates the segment time information once it
has enough information after a seek.
The problem was that we'd completely ignore the requested rate. So store
that and use it in the newly created segment.
https://bugzilla.gnome.org/show_bug.cgi?id=694369
The program_number attribute was overloaded, trying to indicate both
the currently playing program, and the program requested via the
"program-number" property. The end result was that setting the
property didn't work (see #690934).
I added a new requested_program_number field rather than reviving the
current_program_number field because it seemed this would result in
fewer changes overall and be less confusing. It breaks symmetry with
the "program-number" property, but it retains parallels with the likes
of program->program_number.
Because gst_ts_demux_reset is called after the properties have been
parsed, requested_program_number is initialised in gst_ts_demux_init.
Whether this is exactly the right place, I don't know.
Setting the program-number property does not affect which program
is actually being demuxed.
Moving the initialization of the program_number from
gst_ts_demux_reset to gst_ts_demux_init seems to fix this issue.
https://bugzilla.gnome.org/show_bug.cgi?id=690934
* Avoids handling the same seek twice (can happen with playbin and files
with subtitles)
* Set the sequence number of the segment event to the sequence number of
the seek event that generated it (-1 for the initial one).
The seeking start time is approximated from the seek offset in bytes
using the accumulated PCR observations, so on a VBR stream there might
be a big difference between the actual PCR and the estimated one after
the seek. This might result in a long wait to skip all out-of-segment
packets.
Instead we just recalculate the new segment to start at the first PTS
after the seek, so that playback starts immediately.
This is actually a workaround (we'll be skipping the upcoming section)
This will only happen for sections where the beginning is located within
the last 8 bytes of a packet (which is the minimum we need to properly
identify any section beginning).
Later we should figure out a way to store those bytes and mark that
some analysis needs to happen. The probability of this happening is
too low for me to care right now and do that fix. There is a good chance
that the section will eventually be repeated and won't end up on such a border.
* packet.origts is no longer used since the PCR refactoring done ages ago
* known_packet_size is a duplicate of packet_size != 0
* caps was never used outside of the packetizer
We had two issues with the previous code:
1) We were badly handling PUSI-flagged packets. We were discarding the
initial data (if pointer != 0) whereas we should have been accumulating
it with the previous data (if there was continuity, of course).
=> First source of information loss
2) We were not checking whether there were more sections after the end
of one (i.e. when the following byte is not a stuffing byte).
This fixes those two issues (a sketch of the corrected handling is shown
below).
Fixes #677443
https://bugzilla.gnome.org/show_bug.cgi?id=677443
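A rough sketch of the corrected handling for one PUSI-flagged payload;
feed_section_bytes() is a hypothetical stand-in for the real section
packetizer entry point:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical callback standing in for the real section packetizer. */
    void feed_section_bytes (uint16_t pid, const uint8_t *data, size_t len,
        int new_section);

    static void
    handle_pusi_payload (uint16_t pid, const uint8_t *payload, size_t len)
    {
      uint8_t pointer;
      const uint8_t *data;
      size_t left;

      if (len < 1)
        return;
      pointer = payload[0];
      data = payload + 1;
      left = len - 1;
      if (pointer > left)
        return;                           /* corrupt pointer_field */

      /* 1) Everything before the pointer continues the *previous* section
       *    and must be accumulated with it, not discarded. */
      if (pointer > 0)
        feed_section_bytes (pid, data, pointer, 0);
      data += pointer;
      left -= pointer;

      /* 2) Several sections may start in the same packet; keep going until
       *    a stuffing byte (0xFF) or until we run out of payload. */
      while (left >= 3 && data[0] != 0xFF) {
        size_t section_len = 3 + (((data[1] & 0x0F) << 8) | data[2]);
        size_t chunk = section_len < left ? section_len : left;

        feed_section_bytes (pid, data, chunk, 1);
        data += chunk;
        left -= chunk;
      }
    }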
Until now we simply ignored those streams (since we couldn't do anything
with them anyway). Now that we have the mpegts library and we offload the
section handling to the application side we can properly identify and
extract them.
By default it is disabled for tsparse and enabled for tsdemux, but there is
a property to change that.
This should open the way to properly handle all private section streams,
including:
* DSM-CC
* MHEG
* Carousel data
* Metadata streams (though I haven't seen any of those in the wild)
* ... And all other specs/protocols making use of those
Partially fixes #560631
Since we now send all sections to the packetizer, we no longer need to do
any in-depth checks for the validity of a section.
The choice boils down to:
1) Is it from a known PES pid ? If so pass it on (which might be just pushing
downstream in the case of tsparse, or accumulating PES data for tsdemux)
2) Is it from a known SI pid ? If so pass it to the section packetizer
We still have some other stream types which haven't been ported, but
we will do so once we have defined the enums in the mpegts library.
Also add some FIXMEs regarding items discovered during analysis
* Only mpeg-ts section packetization remains.
* Improve code to detect duplicated sections as early as possible
* Add FIXME for various issues that need fixing (but are not regressions)
https://bugzilla.gnome.org/show_bug.cgi?id=702724
We use add_stream(stream_type:-1) to ensure a program's PCR stream is
also taken into account. For most programs this will re-use an
existing ES stream.
So only warn that we are re-adding a stream if it was already present
AND it is not there just to ensure the PCR stream is taken into account.
Only create subtables when needed. It was previously creating one every
single time ... to check if one was present.
And speed up code to detect whether a subtable was already present or not.
Overall makes section pushing 2 times faster.
In some cases (the NIT on a highly-populated DVB-C operator for example), there
will be more than one section emitted for the same subtable and version
number.
In order not to lose those updates for the same version number, we checked
against the CRC of the previous section we parsed.
The problem is that, while it made sure we didn't lose any information, it
also meant that if the same section came back (same version, same CRC) later
on we would re-process it, re-parse it and re-emit it.
This version improves on that by keeping a list of previously observed CRCs
for identical PID/subtable/version-number, and will only process sections if
they really were never seen in the past (as opposed to just before).
On a 30s clip, this brings the number of NIT section parsings down from 4541
to 663.
https://bugzilla.gnome.org/show_bug.cgi?id=614479
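A simplified sketch of that bookkeeping (the real code keeps this state per
PID/subtable; the type here is illustrative):

    #include <glib.h>

    typedef struct {
      guint8  version_number;
      GSList *seen_crcs;        /* GUINT_TO_POINTER(crc) entries */
    } Subtable;

    /* Returns TRUE if this exact section (by CRC) was already processed for
     * the given subtable/version, otherwise remembers it and returns FALSE. */
    static gboolean
    seen_before (Subtable *st, guint8 version, guint32 crc)
    {
      if (version != st->version_number) {
        /* New version: previous CRCs are no longer relevant. */
        g_slist_free (st->seen_crcs);
        st->seen_crcs = NULL;
        st->version_number = version;
      }

      if (g_slist_find (st->seen_crcs, GUINT_TO_POINTER (crc)))
        return TRUE;

      st->seen_crcs = g_slist_prepend (st->seen_crcs, GUINT_TO_POINTER (crc));
      return FALSE;
    }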
First send stream-start, then caps, then segment.
The segment we push is from upstream in push-mode. If we work in pull-mode
then we initialize the base segment to BYTES.
https://bugzilla.gnome.org/show_bug.cgi?id=702422
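A minimal sketch of that ordering for a newly added pad, using the standard
GStreamer event constructors:

    #include <gst/gst.h>

    static void
    send_initial_events (GstPad *pad, const gchar *stream_id, GstCaps *caps,
        const GstSegment *upstream_segment)
    {
      GstSegment bytes_segment;

      gst_pad_push_event (pad, gst_event_new_stream_start (stream_id));
      gst_pad_push_event (pad, gst_event_new_caps (caps));

      if (upstream_segment == NULL) {
        /* pull mode: no upstream segment, start from a BYTES segment */
        gst_segment_init (&bytes_segment, GST_FORMAT_BYTES);
        upstream_segment = &bytes_segment;
      }
      gst_pad_push_event (pad, gst_event_new_segment (upstream_segment));
    }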