Assume that small backward PCR jumps are just the result of upstream packet
mis-ordering and don't reset the timestamp tracking state, on the assumption
that things will be OK again shortly.
Make the threshold for detecting a discontinuity between sequential buffers
configurable, and match it to the smoothing-latency setting on tsparse
to better cope with data bursts.
All pads of a stream are now added at the beginning. In order to cope with
streams that don't get any data (forever or for a long time) we detect gaps
and push out GAP events when needed.
Cleanups and commenting by Jan Schmidt <jan@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=734040
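For illustration, pushing such a GAP event boils down to something like this
minimal sketch (assuming the pad and its current position are known; this is
not the actual tsdemux code):

    #include <gst/gst.h>

    /* Sketch: tell downstream that 'pad' has nothing for the current
     * position, without sending a buffer, so it can keep running. */
    static void
    push_gap (GstPad * pad, GstClockTime position)
    {
      gst_pad_push_event (pad,
          gst_event_new_gap (position, GST_CLOCK_TIME_NONE));
    }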
32 bit integers are going to overflow; in particular the PCR offset
relative to the first PCR will overflow after about 159 seconds. This makes
playback of such streams stop at 159 seconds as the timestamps suddenly
start again from 0. Now it will take a few more years before it happens
again and 64 bits are too small as well.
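For reference, the arithmetic behind the 159 second figure (the PCR runs at
27 MHz), as a small illustration:

    #include <glib.h>

    /* A 32 bit value in 27 MHz PCR units wraps after
     *   2^32 / 27,000,000 = ~159 seconds,
     * whereas a 64 bit value takes thousands of years to wrap. */
    static const guint64 wrap_after_seconds =
        (G_GUINT64_CONSTANT (1) << 32) / 27000000;  /* == 159 */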
It was previously a mix and match of both variants, introducing just too much
confusion.
The prefixes are from now on:
* GstMpegts for structures and type names (and not GstMpegTs)
* gst_mpegts_ for functions (and not gst_mpeg_ts_)
* GST_MPEGTS_ for enums/flags (and not GST_MPEG_TS_)
* GST_TYPE_MPEGTS_ for types (and not GST_TYPE_MPEG_TS_)
The rationale for choosing that is:
* the namespace is shorter/direct (it's mpegts, not mpeg_ts nor mpeg-ts)
* the namespace is one word under Gst
* it's shorter (yah)
When a wrapover/reset occurs, we end up with a small window of time where
the PTS/DTS will still be using the previous/next time-range.
In order not to return bogus values, return GST_CLOCK_TIME_NONE if the
PTS/DTS value to convert differs from the last seen PCR by more than 15s.
https://bugzilla.gnome.org/show_bug.cgi?id=674536
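A minimal sketch of that check, with illustrative names (both values assumed
to already be expressed as GstClockTime):

    #include <gst/gst.h>

    /* Sketch: refuse to convert PTS/DTS values that are more than 15s away
     * from the last observed PCR; during a wrapover/reset window such
     * values belong to the previous/next time-range and would map to
     * bogus running-times. */
    static GstClockTime
    checked_convert (GstClockTime value, GstClockTime last_pcr_time)
    {
      GstClockTime diff = (value > last_pcr_time) ?
          value - last_pcr_time : last_pcr_time - value;

      if (diff > 15 * GST_SECOND)
        return GST_CLOCK_TIME_NONE;

      return value;             /* the real code applies the group's offset here */
    }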
Using 32 bit unsigned values for the corrected pcr/offset meant that we
potentially ended up with bogus values.
Furthermore, refpcr - refpcroffset could end up being negative, which
PCRTIME_TO_GSTTIME() can't handle (and returned a massive positive value).
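A small illustration of the failure mode, assuming the usual 27 MHz to
nanoseconds conversion behind PCRTIME_TO_GSTTIME():

    #include <glib.h>

    /* assumed to mirror the conversion used by the packetizer (27 MHz -> ns) */
    #define PCRTIME_TO_GSTTIME(t) (((t) * (guint64) 1000) / 27)

    static void
    illustrate_underflow (void)
    {
      guint32 refpcr = 1000, refpcroffset = 2000;

      /* the unsigned subtraction wraps around instead of going negative,
       * so the conversion yields ~159 seconds instead of a small negative
       * value */
      guint64 bogus = PCRTIME_TO_GSTTIME ((guint64) (refpcr - refpcroffset));

      g_print ("%" G_GUINT64_FORMAT "\n", bogus);
    }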
If _set_current_pcr_offset gets called after a flushing seek, we would end
up using the current group for the delta calculation ... whereas we should
be using the first group to calculate shifts.
Also add an early exit if there are no changes to apply.
When working in push mode, we need to be able to evaluate the duration
based on a single group of observations.
To do that we use the current group's values.
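Roughly speaking, that estimation extrapolates the byte rate observed in the
current group over the whole stream size; a sketch with assumed parameter
names, re-using the PCRTIME_TO_GSTTIME() conversion shown above:

    #include <gst/gst.h>

    /* Sketch: estimate the duration from a single group of PCR/offset
     * observations by scaling the observed time span to the full size. */
    static GstClockTime
    estimate_duration (guint64 first_pcr, guint64 last_pcr,
        guint64 first_offset, guint64 last_offset, guint64 total_bytes)
    {
      GstClockTime elapsed = PCRTIME_TO_GSTTIME (last_pcr - first_pcr);
      guint64 bytes = last_offset - first_offset;

      if (bytes == 0)
        return GST_CLOCK_TIME_NONE;

      return gst_util_uint64_scale (elapsed, total_bytes, bytes);
    }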
When handling the PTS/DTS conversion in new groups, there's a possibility
that the PTS might be smaller than the first PCR value observed, due to
re-ordering.
When using the current group, only apply the wraparound correction when we
are certain it is one (i.e. differs by more than a second) and not when it's
just a small difference (like out-of-order PTS).
https://bugzilla.gnome.org/show_bug.cgi?id=731088
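A sketch of the wraparound-vs-reordering decision described above, working in
27 MHz PCR units with illustrative constants:

    #include <glib.h>

    #define PCR_SECOND ((guint64) 27000000)           /* 1s in 27 MHz units */
    #define PCR_WRAP   (((guint64) 1 << 33) * 300)    /* full PCR wrap period */

    /* Sketch: a value slightly below the group's first PCR is treated as
     * re-ordering and left alone; only a gap of more than one second is
     * considered a real wraparound and corrected. */
    static guint64
    maybe_unwrap (guint64 pts, guint64 first_pcr)
    {
      if (pts < first_pcr && first_pcr - pts > PCR_SECOND)
        return pts + PCR_WRAP;
      return pts;
    }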
* Search in the current pending values first. For CBR streams we can very
easily end up having just one initial observation and then nothing
else (since the bitrate doesn't change).
* Use a group's values either when we are currently in that group *OR* when
there is only one group.
* If the group to use isn't closed (points are still being accumulated in the
PCROffsetCurrent), use the latest data available for the calculation.
* In the unlikely event that all of this *still* didn't produce more
than one data point, just return the initial offset.
While this probably should never happen if callers are well behaved,
this avoids a crash if it does, and emits a warning about it. Unsure if
it'd be better to not add at all, but it should not happen...
Coverity 1139713
The requested TS might be beyond the last observed PCR. In order to calculate
a coherent offset, we need to use the last and previous-to-last groups.
https://bugzilla.gnome.org/show_bug.cgi?id=721035
This allows:
* Better duration estimation
* More accurate PCR location
* Overall more accurate running-time location and calculation
Location and values of PCR are recorded in groups (PCROffsetGroup)
with notable PCR/Offset observations in them (when bitrate changed
for example). PCR and offset are stored as 32bit values to
reduce memory usage (they are differences against that group's
first_{pcr|offset}).
Those groups each contain a global PCR offset (pcr_offset) which
indicates how far in the stream that group is.
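For orientation, the layout described above looks roughly like this (field
names are illustrative; the real definitions live in mpegtspacketizer.h):

    #include <glib.h>

    /* Rough sketch of a group of PCR/offset observations. Notable points
     * are stored as 32 bit deltas against first_pcr/first_offset to keep
     * memory usage down. */
    typedef struct
    {
      guint64 first_pcr;       /* first PCR of the group (27 MHz units) */
      guint64 first_offset;    /* byte offset at which it was seen */

      guint32 *pcr_deltas;     /* notable PCRs, relative to first_pcr */
      guint32 *offset_deltas;  /* matching offsets, relative to first_offset */
      guint nb_points;

      guint64 pcr_offset;      /* how far into the overall stream this group is */
    } PCROffsetGroupSketch;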
Whenever new PCR values are observed, we store them in a sliding
window estimator (PCROffsetGroupCurrent).
When a reset/wrapover/gap is detected, we close the current group with
current values and start a new one (the pcr_offset of that new group
is also calculated).
When a notable change in bitrate is observed (+/- 10%), we record
new values in the current group. This is a compromise between
storing all PCR/offset observations and none, while at the same time
providing better information for running-time<=>offset calculation
in VBR streams.
Whenever a new non-contiguous group is started (due to seeking for example)
we re-evaluate the pcr_offset of each group. This allows detecting PCR
wrapover/reset as quickly as possible.
When wanting to find the offset of a certain running-time, one can
iterate the groups by looking at the pcr_offset (which in essence *is*
the running-time of that group in the overall stream).
Once a group (or neighbouring groups if the running-time is between two
groups) is found, one can use the recorded values to find the most
accurate offset.
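The final step is then a linear interpolation between the two bracketing
observations; a minimal sketch (names are made up, and the
PCRTIME_TO_GSTTIME() conversion shown earlier is assumed):

    #include <gst/gst.h>

    /* Sketch: estimate the byte offset for a running-time 'ts' that falls
     * between two recorded observations (pcr1, off1) and (pcr2, off2).
     * 'ts' is relative to the running-time of (pcr1, off1). */
    static guint64
    interpolate_offset (GstClockTime ts, guint64 pcr1, guint64 off1,
        guint64 pcr2, guint64 off2)
    {
      GstClockTime span = PCRTIME_TO_GSTTIME (pcr2 - pcr1);

      if (span == 0)
        return off1;

      return off1 + gst_util_uint64_scale (ts, off2 - off1, span);
    }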
Right now this code is only used in pull-mode, but could also
be activated later on for any seekable stream, like live timeshift
with queue2.
Future improvements:
* some heuristics to "compress" the stored values in groups so as to keep
the memory usage down while still keeping a decent number of notable
points.
* After a seek compare expected and obtained PCR/Offset and if the
difference is too big, re-calculate position with newly observed
values and seek to that more accurate position.
Note that this code will *not* provide keyframe-accurate seeking, but
will allow a much more accurate PCR/running-time/offset location on
any random stream.
For past (observed) values it will be as accurate as can be.
For future values it will be better than the current situation.
Finally the more you seek, the more accurate your positioning will be.
The previous code could enter an infinite loop because the adapter state
could get out of sync with its mapped data state after sync was lost.
The code was pretty confusing so it's been rewritten to be clearer.
The easiest way to reproduce the infinite loop is to use the breakmydata
element before tsdemux to trigger a resync.
https://bugzilla.gnome.org/show_bug.cgi?id=708161
If we ever lose sync, we were just checking for the next 0x47 marker ...
which might actually occur within an mpeg-ts packet.
Instead check for 3 repeating 0x47 markers at the expected packet size
interval, which is the same logic we use when we initially look for the
packet size.
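A sketch of that stricter resync test (illustrative, not the actual
packetizer code):

    #include <glib.h>

    /* Sketch: only accept a resync position if 0x47 repeats at the expected
     * packet-size interval, instead of locking onto a 0x47 byte that merely
     * occurs inside a packet's payload. */
    static gboolean
    is_sync_position (const guint8 * data, gsize avail, guint pkt_size,
        gsize pos)
    {
      return (pos + 2 * pkt_size < avail) &&
          data[pos] == 0x47 &&
          data[pos + pkt_size] == 0x47 &&
          data[pos + 2 * pkt_size] == 0x47;
    }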
We were only resetting the first 512 values of the lookup table instead
of the whole 8192.
This resulted in any PCR PID over 0x0200 ... ending up using the first PCR
table :(
Helps with debugging issues. Also remove an unused variable (opcr).
This will also allow us in the future to properly detect:
* random-access location (to enable keyframe observation and
potentially seeking)
* discont location (to properly handle resets)
* splice location (to properly handle new stream changes)
This is actually a workaround (we'll be skipping the upcoming section).
This will only happen for sections where the beginning is located within
the last 8 bytes of a packet (which is the minimum we need to properly
identify any section beginning).
Later we should figure out a way to store those bytes and mark that
some analysis needs to happen. The probability of this happening is
too low for me to care right now and do that fix. There is a good chance
that the section will eventually be repeated and won't end up on such a border.
* packet.origts is no longer used since the PCR refactoring done ages ago
* known_packet_size is a duplicate of packet_size != 0
* caps was never used outside of the packetizer
We had two issues with the previous code:
1) We were badly handling PUSI-flagged packets. We were discarding the
initial data (if pointer != 0) whereas we should have been accumulating
it with the previous data (if there was a continuity of course).
=> First source of information loss.
2) We were not checking whether there were more sections after the end
of one (i.e. when the following byte was not a stuff byte).
This fixes those two issues.
https://bugzilla.gnome.org/show_bug.cgi?id=677443
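A condensed sketch of the two fixes above (helper names are hypothetical; the
real code walks the packet payload in the packetizer):

    #include <glib.h>

    /* hypothetical helpers, for illustration only */
    void append_pending (const guint8 * data, guint len);
    const guint8 *start_section (const guint8 * data, const guint8 * end);

    /* Sketch: handle a packet payload with the PUSI flag set.
     * 'data'..'end' delimits the payload. */
    static void
    handle_pusi_payload (const guint8 * data, const guint8 * end)
    {
      guint8 pointer = *data++;

      if (data + pointer > end)
        return;                 /* corrupt pointer_field, bail out */

      /* 1) the first 'pointer' bytes finish the previously started section:
       *    accumulate them (continuity permitting) instead of discarding
       *    them */
      if (pointer > 0)
        append_pending (data, pointer);
      data += pointer;

      /* 2) a packet can carry several back-to-back sections: keep parsing
       *    until we hit stuffing bytes (0xFF) or run out of payload */
      while (data < end && *data != 0xFF)
        data = start_section (data, end);
    }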
* Only mpeg-ts section packetization remains.
* Improve code to detect duplicated sections as early as possible
* Add FIXME for various issues that need fixing (but are not regressions)
https://bugzilla.gnome.org/show_bug.cgi?id=702724
Only create subtables when needed. It was previously creating one every
single time ... to check if one was present.
And speed up code to detect whether a subtable was already present or not.
Overall makes section pushing 2 times faster.
In some cases (NIT on highly-populated DVB-C operator for example), there
will be more than one section emitted for the same subtable and version
number.
In order not to lose those updates for the same version number, we checked
against the CRC of the previous section we parsed.
The problem is that, while it made sure we didn't lose any information, it
also meant that if the same section came back (same version, same CRC) later
on we would re-process it, re-parse it and re-emit it.
This version improves on that by keeping a list of previously observed CRCs
for identical PID/subtable/version-number, and will only process sections if
they really were never seen in the past (as opposed to just before).
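The bookkeeping boils down to something like this sketch (a
per-PID/subtable/version list of seen CRCs; not the exact upstream data
structure):

    #include <glib.h>

    /* Sketch: return TRUE if this CRC was already seen for the given
     * PID/subtable/version combination, otherwise remember it. */
    static gboolean
    seen_before (GList ** seen_crcs, guint32 crc)
    {
      if (g_list_find (*seen_crcs, GUINT_TO_POINTER (crc)))
        return TRUE;

      *seen_crcs = g_list_prepend (*seen_crcs, GUINT_TO_POINTER (crc));
      return FALSE;
    }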
On a 30s clip, this brings down the number of NIT section parsing from 4541
down to 663.
https://bugzilla.gnome.org/show_bug.cgi?id=614479