The previous code could enter an infinite loop because the adapter state
could get out of sync with its mapped data state after sync was lost.
The code was pretty confusing so it's been rewritten to be clearer.
The easiest way to reproduce the infinite loop is to use the breakmydata
element before tsdemux to trigger a resync.
https://bugzilla.gnome.org/show_bug.cgi?id=708161
Some streams had wrong values for the stream_id_extension; make sure
we only remember the valid ones.
For streams with PES_extension_field_length == 0, assume there's nothing
else.
For streams that state they have a TREF extension but don't have enough
data to store it, just assume it was produced by a non-compliant muxer
and skip the remaining data.
Only store the remaining data in stream_id_extension_data instead of storing
data we already parsed.
If we ever lose sync, we were just checking for the next 0x47 marker ...
which might actually happen within an MPEG-TS packet.
Instead check for 3 repeating 0x47 markers at the expected packet size
interval, which is the same logic we use when we initially look for the
packet size.
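Roughly, the resync scan now looks like this (a plain C sketch with
hypothetical names, not the actual packetizer code):

    #include <stddef.h>
    #include <stdint.h>

    #define SYNC_BYTE 0x47

    /* Return the offset of the first position where three sync bytes
     * appear at packet_size intervals, or -1 if none is found. */
    static ptrdiff_t
    find_resync_offset (const uint8_t *data, size_t size, size_t packet_size)
    {
      size_t i;

      for (i = 0; i + 2 * packet_size < size; i++) {
        if (data[i] == SYNC_BYTE &&
            data[i + packet_size] == SYNC_BYTE &&
            data[i + 2 * packet_size] == SYNC_BYTE)
          return (ptrdiff_t) i;
      }
      return -1;
    }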
We were only resetting the first 512 values of the lookup table instead
of the whole 8192.
This resulted in any PCR PID over 0x0200 ... ending up re-using the first
PCR table :(
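PIDs are 13 bits wide, hence the 8192 entries. The fix boils down to
clearing the whole table (a sketch with a hypothetical structure, not the
actual packetizer fields):

    #include <string.h>

    #define MAX_PIDS 8192               /* PIDs are 13-bit values */

    typedef struct {
      void *pcr_table[MAX_PIDS];        /* per-PID PCR lookup entries */
    } PacketizerState;

    static void
    reset_pcr_tables (PacketizerState *state)
    {
      /* Previously only the first 512 entries were cleared, so any PID
       * >= 0x0200 kept hitting stale entries. Clear all 8192. */
      memset (state->pcr_table, 0, sizeof (state->pcr_table));
    }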
ATSC AC3 streams are always guaranteed to be AC3 if the EAC3 descriptor
is not present.
If the stream registration id is 'AC-3' then it's also guaranteed to be AC3.
Finally, if the AC3 descriptor is present it's guaranteed to be AC3.
Only silences a warning, but still.
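The resulting decision order, as a sketch (hypothetical helper/field
names; the real code inspects the PMT stream descriptors):

    #include <stdbool.h>

    typedef struct {
      bool is_atsc;                /* stream comes from an ATSC PMT       */
      bool has_eac3_descriptor;    /* EAC3 descriptor present             */
      bool has_ac3_descriptor;     /* AC3 descriptor present              */
      bool registration_is_ac3;    /* registration descriptor says 'AC-3' */
    } StreamInfo;

    /* Returns true when the stream is guaranteed to be plain AC3. */
    static bool
    stream_is_ac3 (const StreamInfo *info)
    {
      if (info->is_atsc && !info->has_eac3_descriptor)
        return true;     /* ATSC without EAC3 descriptor => AC3 */
      if (info->registration_is_ac3)
        return true;     /* registration id 'AC-3' => AC3       */
      if (info->has_ac3_descriptor)
        return true;     /* explicit AC3 descriptor => AC3      */
      return false;
    }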
We know we will not overflow 64 bits, therefore just use direct
multiplication/division instead of the scale method (trims it from
around 50 instructions down to 2 or 3).
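For example (a sketch, not the actual macro): converting a 27 MHz PCR
value to nanoseconds. A PCR fits in 42 bits, so multiplying by 1000 cannot
overflow 64 bits and an overflow-safe helper such as gst_util_uint64_scale
is unnecessary:

    #include <stdint.h>

    /* 27 MHz ticks -> nanoseconds; pcr is at most ~2^42, so pcr * 1000
     * stays well below 2^64 and plain integer arithmetic is safe. */
    static inline uint64_t
    pcr_to_ns (uint64_t pcr)
    {
      return pcr * 1000 / 27;
    }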
Helps with debugging issues. Also remove an unused variable (opcr).
This will also allow us in the future to properly detect:
* random-access location (to enable keyframe observation and
potentially seeking)
* discont location (to properly handle resets)
* splice location (to properly handle new stream changes)
The new seek handling re-creates the segment time information once it
has enough information after a seek.
The problem was that we'd completely ignore the requested rate. So store
that and use it in the newly created segment.
https://bugzilla.gnome.org/show_bug.cgi?id=694369
The program_number attribute was overloaded, trying to indicate both
the currently playing program, and the program requested via the
"program-number" property. The end result was that setting the
property didn't work (see #690934).
I added a new requested_program_number field rather than reviving the
current_program_number field because it seemed this would result in
fewer changes overall and be less confusing. It breaks symmetry with
the "program-number" property, but it retains parallels with the likes
of program->program_number.
Because gst_ts_demux_reset is called after the properties have been
parsed, requested_program_number is initialised in gst_ts_demux_init.
Whether this is exactly the right place, I don't know.
Setting the program-number property does not affect which program
is actually being demuxed.
Moving the initialization of the program_number from
gst_ts_demux_reset to gst_ts_demux_init seems to fix this issue.
https://bugzilla.gnome.org/show_bug.cgi?id=690934
* Avoids handling the same seek twice (can happen with playbin and files
with subtitles)
* Set the sequence number of the segment event to the sequence number of
the seek event that generated it (-1 for the initial one).
The seeking start time is approximated from the seek offset in bytes
using the accumulated PCR observations, so on a VBR stream there might
be a big difference between the actual PCR and the estimated one after
the seek. This might result in a long wait to skip all out-of-segment
packets.
Instead we just recalculate the new segment to start at the first PTS
after the seek, so that playback starts immediately.
This is actually a workaround (we'll be skipping the upcoming section)
This will only happen for sections where the beginning is located within
the last 8 bytes of a packet (which is the minimum we need to properly
identify any section beginning).
Later we should figure out a way to store those bytes and mark that
some analysis needs to happen. The probability of this happening is
too low for me to care right now and do that fix. There is a good chance
that the section will eventually be repeated and won't end up on such a
border.
* packet.origts is no longer used since the PCR refactoring done ages ago
* known_packet_size is a duplicate of packet_size != 0
* caps was never used outside of the packetizer
We had two issues with the previous code:
1) We were badly handling PUSI-flagged packets. We were discarding the
initial data (if pointer != 0) whereas we should have been accumulating
it with the previous data (if there was a continuity of course).
=> First source of information loss
2) We were not checking whether there were more sections after the end
of one (i.e. when the following byte was not a stuffing byte).
This fixes those two issues.
Fixes #677443
https://bugzilla.gnome.org/show_bug.cgi?id=677443
Until now we simply ignored those streams (since we couldn't do anything
with them anyway). Now that we have the mpegts library and we offload the
section handling to the application side we can properly identify and
extract them.
By default it is disabled for tsparse and enabled for tsdemux, but there is
a property to change that.
This should open the way to properly handle all private section streams,
including:
* DSM-CC
* MHEG
* Carousel data
* Metadata streams (though I haven't seen any of those in the wild)
* ... And all other specs/protocols making use of those
Partially fixes #560631
Since we now send all sections to the packetizer, we no longer need to do
any more in-depth checks for the validity of a section.
The choice boils down to:
1) Is it from a known PES PID? If so, pass it on (which might be just pushing
downstream in the case of tsparse, or accumulating PES data for tsdemux)
2) Is it from a known SI PID? If so, pass it to the section packetizer
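A sketch of that dispatch (plain C with hypothetical names and stub
handlers, not the mpegtsbase code):

    #include <stddef.h>
    #include <stdint.h>

    typedef enum {
      PID_TYPE_UNKNOWN,
      PID_TYPE_PES,       /* PID belongs to a known elementary stream */
      PID_TYPE_SECTION    /* PID carries SI/PSI sections              */
    } PidType;

    /* Hypothetical per-PID table (PIDs are 13 bits -> 8192 entries). */
    static PidType pid_types[8192];

    /* Stubs standing in for the real handlers. */
    static void
    push_or_accumulate_pes (uint16_t pid, const uint8_t *data, size_t size)
    { (void) pid; (void) data; (void) size; }

    static void
    push_to_section_packetizer (uint16_t pid, const uint8_t *data, size_t size)
    { (void) pid; (void) data; (void) size; }

    static void
    handle_packet (uint16_t pid, const uint8_t *payload, size_t size)
    {
      switch (pid_types[pid]) {
        case PID_TYPE_PES:
          /* tsparse: push downstream; tsdemux: accumulate PES data */
          push_or_accumulate_pes (pid, payload, size);
          break;
        case PID_TYPE_SECTION:
          /* hand the payload over to the section packetizer */
          push_to_section_packetizer (pid, payload, size);
          break;
        default:
          /* unknown PID: nothing to do */
          break;
      }
    }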
We still have some other stream types which haven't been ported, but
we will do so once we have defined the enums in the mpegts library.
Also add some FIXMEs regarding items discovered during analysis
* Only mpeg-ts section packetization remains.
* Improve code to detect duplicated sections as early as possible
* Add FIXME for various issues that need fixing (but are not regressions)
https://bugzilla.gnome.org/show_bug.cgi?id=702724
We use add_stream(stream_type:-1) to ensure a program's PCR stream is
also taken into account. For most programs this will re-use an
existing ES stream.
So only warn that we are re-adding a stream if it was already present
AND the addition is not just there to take the PCR stream into account.
Only create subtables when needed. It was previously creating one every
single time ... to check if one was present.
Also speed up the code that detects whether a subtable was already present
or not.
Overall this makes section pushing twice as fast.
In some cases (NIT on highly-populated DVB-C operator for example), there
will be more than one section emitted for the same subtable and version
number.
In order not to lose those updates for the same version number, we checked
against the CRC of the previous section we parsed.
The problem is that, while it made sure we didn't lose any information, it
also meant that if the same section came back (same version, same CRC) later
on we would re-process it, re-parse it and re-emit it.
This version improves on that by keeping a list of previously observed CRCs
for identical PID/subtable/version-number combinations, and will only process
sections if they really were never seen in the past (as opposed to just
before).
On a 30s clip, this brings the number of NIT sections parsed down from 4541
to 663.
https://bugzilla.gnome.org/show_bug.cgi?id=614479
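A sketch of the seen-CRC bookkeeping (plain C, hypothetical structure; in
the real code this is keyed off PID/subtable/version-number):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_SEEN_CRCS 64

    /* State kept per PID/subtable/version-number combination. */
    typedef struct {
      uint32_t seen_crcs[MAX_SEEN_CRCS];
      unsigned n_seen;
    } SubtableVersion;

    /* Returns true if this CRC was never observed before (and records it),
     * false if the exact same section was already processed. */
    static bool
    subtable_section_is_new (SubtableVersion *sv, uint32_t crc)
    {
      unsigned i;

      for (i = 0; i < sv->n_seen; i++)
        if (sv->seen_crcs[i] == crc)
          return false;            /* already parsed and emitted, skip */

      if (sv->n_seen < MAX_SEEN_CRCS)
        sv->seen_crcs[sv->n_seen++] = crc;
      return true;
    }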
First send stream-start, then caps, then segment.
The segment we push is from upstream in push-mode. If we work in pull-mode
then we initialize the base segment to BYTES.
https://bugzilla.gnome.org/show_bug.cgi?id=702422
Sync byte scan is incorrect for M2TS streams because the 4 timestamp
bytes were not included in the flush size. This can result in an
infinite loop.
Rework the scan code to be clearer and work in all cases.
Descriptors are stored as a GValueArray of GString. The downside is
that there is no way to "pass" ownership of a GValue to a GValueArray,
which previously resulted in expensive copy/free of the (already expensive)
GString.
Here we first estimate the size of the GValueArray, then create it,
then directly use the GValues of that array.
Speeds up total SI parsing by ~30%
Since there is a conflict between the DCII stream type and BluRay
stream types, moved the processing of BluRay-specific stream types
to the beginning of the function. Only if a BluRay stream type
IS NOT found do we proceed to check the rest of the stream type
identifiers
Previous code was also "sort-of" handling a similar conflict between
BluRay AC3 audio and standard AC3 audio. Moved the special-case BluRay
AC3 handling from the main switch statement to the new BluRay-specific
switch.
https://bugzilla.gnome.org/show_bug.cgi?id=697892
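The resulting structure is roughly this (a sketch with made-up values and
names, only to show the ordering; not the actual tsdemux code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative value: the point is that the same stream_type number
     * is reused by the BluRay and DigiCipher II (DCII) specifications. */
    #define ST_CONFLICTING_TYPE 0x80

    typedef enum { CAPS_NONE, CAPS_BLURAY_SPECIFIC, CAPS_DCII_VIDEO } Caps;

    static Caps
    caps_for_stream_type (uint8_t stream_type, bool program_is_bluray)
    {
      /* 1) BluRay-specific types are checked first. */
      if (program_is_bluray) {
        switch (stream_type) {
          case ST_CONFLICTING_TYPE:
            return CAPS_BLURAY_SPECIFIC;
          default:
            break;               /* not a BluRay type, fall through */
        }
      }

      /* 2) Only when no BluRay type matched do we check the rest. */
      switch (stream_type) {
        case ST_CONFLICTING_TYPE:
          return CAPS_DCII_VIDEO;
        default:
          return CAPS_NONE;
      }
    }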
And if we detect a discontinuity there (like... when losing packets
or having MPEGTS over raw UDP with out-of-order packets) we just
drop the corresponding packet.
A future version could try to implement a re-ordering algorithm based
on that, similar to what rtpjitterbuffer does.
Also reset segment info and drop the segment event when demuxer is
flushed.
Restore demuxer segment with the info stored in base when demuxer is
going to push data again if needed.
Drop the code to recover the segment info from base in the initial program
because it's superseded by the new code.
Ensure the chain is not running before resetting the state, to avoid race
conditions and random corruption downstream.
Also fixes segfaults in the packetizer due to wrong available values that
cause gst_adapter_map to return a NULL pointer.
This reverts commit e14e310f71.
It would be better to move the packetizer flushing to FLUSH_STOP and avoid
the race that way, without introducing a memory barrier that could
impact performance.
When dealing with non-time based push-mode streams, we need to revert
to using the offset-based PCR/PTS estimation logic of packetizer.
This solves use cases such as:
pushfile:// ! tsdemux
src ! queue ! tsdemux
https://bugzilla.gnome.org/show_bug.cgi?id=687178
additional_copy_info: need to get rid of the highest
bit, not the lowest one
program_packet_sequence_counter: also need to get rid
of the highest bit instead of multiplying by a random
value
original_stuff_length: want to AND 0x3f to extract the
lowest 6 bits, not multiply by it.
None of these fields are actually used though, so these
should not have caused any issues.
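For reference, the corrected extractions look like this (plain C; val is
the raw byte read from the PES header):

    #include <stdint.h>

    /* additional_copy_info and program_packet_sequence_counter are 7-bit
     * fields below a marker bit; original_stuff_length is the 6 lowest
     * bits of its byte (below the marker and MPEG1_MPEG2_identifier bits). */
    static inline uint8_t
    additional_copy_info (uint8_t val)
    {
      return val & 0x7f;        /* drop the marker bit */
    }

    static inline uint8_t
    program_packet_sequence_counter (uint8_t val)
    {
      return val & 0x7f;        /* drop the marker bit */
    }

    static inline uint8_t
    original_stuff_length (uint8_t val)
    {
      return val & 0x3f;        /* keep only the lowest 6 bits */
    }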
This can be used to notify subclasses that no more data is expected this
round.
tsparse will use it to push whole buffers (without copy) on the main
source pad.
It could also be used later to decide whether to push pending data
in order to reduce latency.
Avoids consistently failing to detect that a packet is complete; such a
packet would then only be pushed upon the start of the next packet, which
leads to quite a delay in the case of a sparse (subtitle) stream.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=666674
Original code to parse satellite delivery descriptors to generate
"satellite" structures appeared to be copy & pasted from the cable code
without amending it for satellite.
Also added 8PSK to dvbsrc's enum for modulation.
https://bugzilla.gnome.org/show_bug.cgi?id=654485
tspad always has a static source pad which outputs everything received
(not functional yet).
Program pads are now request pads.
Remove all cruft that should have been removed from the switch over
to mpegtsbase.
Peek as much as possible in one go. Ideally we should remove usage of
adapter altogether, but for the time being it provides a big enough
speedup (around 2 times faster per packet processed).
According to the specifications a PTS_DTS_flags value of 0x01 is forbidden.
... but there are some rare files out there that do that.
Instead of erroring out, let's warn and carry on parsing accordingly.
If the packet is really corrupted there are enough checks afterward to
detect that.
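Sketch of the check (plain C; offsets follow the PES header layout):

    #include <stdint.h>
    #include <stdio.h>

    /* PTS_DTS_flags are the two top bits of the flags byte at offset 7
     * of a PES header: 0x2 = PTS only, 0x3 = PTS+DTS, 0x0 = none,
     * 0x1 = forbidden. */
    static uint8_t
    parse_pts_dts_flags (const uint8_t *pes_header)
    {
      uint8_t flags = (pes_header[7] >> 6) & 0x03;

      if (flags == 0x01) {
        /* Forbidden by the spec but seen in rare files: warn and carry
         * on; later checks will catch genuinely corrupt packets. */
        fprintf (stderr, "warning: forbidden PTS_DTS_flags value 0x01\n");
      }
      return flags;
    }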
The overhead of creating/using a 188-byte GstBuffer from GstAdapter
is too expensive.
We now peek the next packet, and provide a data/size pair which is only
valid until the packetizer packet is cleared.
In addition, cleanup all the internal code to deal with that new
behaviour and remove double-checks which are no longer needed.
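In GstAdapter terms, the pattern is roughly this (a sketch, not the
actual packetizer code):

    #include <gst/base/gstadapter.h>

    #define TS_PACKET_SIZE 188

    /* Peek one packet without copying it into a new GstBuffer. The
     * returned pointer is only valid until unmap/flush, which is why the
     * packetizer now hands out data/size pairs callers must not keep. */
    static gboolean
    process_next_packet (GstAdapter *adapter)
    {
      const guint8 *data;

      if (gst_adapter_available (adapter) < TS_PACKET_SIZE)
        return FALSE;

      data = gst_adapter_map (adapter, TS_PACKET_SIZE);
      if (data[0] != 0x47) {
        /* lost sync: the real code would trigger a resync scan here */
      }
      /* ... parse the rest of the 188 bytes ... */
      gst_adapter_unmap (adapter);
      gst_adapter_flush (adapter, TS_PACKET_SIZE);

      return TRUE;
    }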
The section_length is now the corrected section_length (i.e. with
the additional 3 bytes).
Avoid using gst_adapter_prev_timestamp and instead track
the timestamp ourselves.
Data should not be flushed out of tsdemux just because a payload unit start
indicator (PUSI) is seen in an adaptation-only TS packet. If the packet
contains no payload, a PUSI does not indicate a new PES packet, but PSI
information, etc.
This fixes playback of several TS files which contain TS packets without a
payload but with the PUSI set to 1.
https://bugzilla.gnome.org/show_bug.cgi?id=676168
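The relevant header bits, as a plain C sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* TS header layout:
     *   payload_unit_start_indicator : bit 0x40 of byte 1
     *   adaptation_field_control     : bits 0x30 of byte 3
     * A payload is only present when adaptation_field_control has its
     * low bit set (values 0x01 and 0x03). */
    static bool
    pusi_starts_new_payload (const uint8_t *ts_packet)
    {
      bool pusi        = (ts_packet[1] & 0x40) != 0;
      bool has_payload = (ts_packet[3] & 0x10) != 0;

      /* A PUSI in an adaptation-only packet must not flush pending PES
       * data. */
      return pusi && has_payload;
    }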
When the PES header tells us how big the outgoing packet is, push the
packet downstream as soon as we have the specified size instead of waiting
for the beginning of the next packet.
Reduces latency and removes issues with very sparse streams (like subtitles
and subpictures).
If we *really* can't figure out the first start position, that most
likely means the data to push out doesn't have any timestamp.
Use a default value of 0 in that case.
Allows PCR<=>PTS<=>offset estimation/calculation
Right now the calculation is very naive, but can be extended later on
without disrupting the code in tsdemux/mpegtsbase
* Don't take into account packets that arrived at the same time as
previous ones for clock skew estimation
* Add convenience method for processing the next ts packet
This is probably the cause of an occasional crash while streaming
MPEG. Blind fix after staring at the code and following the logic, so
it may or may not fix the issue; I cannot test.
(Port of 4275a70cb5 from mpegdemux)
Sometimes the same PMT applying to a given program appears several times in a
stream. tsdemux was totally baffled when this happened; we now keep the one we
already applied so it works properly.
When removing the current program, it will get freed by the
hash table removal callback, so ensure we clear our pointer
to it.
Fixes a crash later on in gst_ts_demux_push trying to access it.
https://bugzilla.gnome.org/show_bug.cgi?id=656927
When a program is changed, stream_added is called, which sets
need_newsegment to TRUE; then stream_removed is called, which calls
flush_pending_data, which checks for the newsegment and causes
it to send a newsegment.
We must not send the newsegment when flushing the pending data on the
removed stream. We should only push it when flushing data on the newly
added streams (after they finish parsing their PTS header)
If a program/stream is changed, then a newsegment is sent which must
not be the same as the base segment since it happens later. We must
shift the start position by the time elapsed between the newsegment
and the current PTS of the stream.
The program_stopped vmethod was called before the stream_removed vmethod
was called. Since we only did stream-related operations in there,
we just remove the program_stopped vmethod and do everything in the
stream_removed one.
Also, make sure we flush out all pending data before sending EOS.
stream_type is stored as a guint inside the GstStructure but was retrieved
through the va_list with a pointer to a guint16. This would cause stack
corruption when the code is compiled without optimisation (e.g. at -O0 the
compiler won't pad the stack to optimise out the required mask).
https://bugzilla.gnome.org/show_bug.cgi?id=655540
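The fix is simply to read the field with a matching type. A sketch (the
exact field name in the structure may differ):

    #include <gst/gst.h>

    static guint
    get_stream_type (const GstStructure *s)
    {
      guint stream_type = 0;

      /* The field is a G_TYPE_UINT, so it must be read into a guint.
       * Reading it through a guint16 * makes the vararg machinery write
       * 32 bits into a 16-bit slot and corrupts the stack. */
      gst_structure_get (s, "stream-type", G_TYPE_UINT, &stream_type, NULL);

      return stream_type;
    }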
We first activate new streams before shutting down old ones.
We emit no-more-pads after we add new streams and emit EOS before
removing old ones.
Also cleanup/refactor a bit more of the code accordingly
We in fact get the size of the header (including stuffing bytes), therefore
use that instead of trying to skip 0xff bytes ourselves, since some media
streams do start with 0xff (like MPEG audio's initial 0xfff).