We had two issues with the previous code:
1) We were mishandling PUSI-flagged packets. We were discarding the
initial data (when pointer != 0) whereas we should have been accumulating
it with the previous data (provided there was continuity, of course).
=> First source of information loss
2) We were not checking whether more sections followed the end of one
(i.e. whether the following byte was a stuffing byte or not).
This fixes those two issues.
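A rough sketch of the intended handling (helper names and structure here
are illustrative, not the actual packetizer code):

    /* 'data'/'end' delimit the payload of a PUSI-flagged TS packet and
     * 'pointer' is the pointer_field (its first byte). */
    guint8 pointer = *data++;

    if (pointer != 0 && continuity_ok) {
      /* The bytes before the pointer belong to the PREVIOUS section:
       * accumulate them instead of throwing them away. */
      accumulate_previous_section (data, pointer);
    }
    data += pointer;

    /* A new section starts here. Once it is complete, keep going as long
     * as the next byte is not a stuffing byte (0xFF): several sections
     * can be packed into the same TS packet. */
    while (data < end && *data != 0xFF)
      data += start_new_section (data, end - data);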
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=677443
Until now we simply ignored those streams (since we couldn't do anything
with them anyway). Now that we have the mpegts library and we offload the
section handling to the application side we can properly identify and
extract them.
By default it is disabled for tsparse and enabled for tsdemux, but there is
a property to change that.
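For illustration, toggling that from an application could look like this
(the property name below is an assumption; check the element documentation
for the actual one):

    GstElement *parse = gst_element_factory_make ("tsparse", NULL);

    /* ask tsparse to also emit private section streams (assumed property name) */
    g_object_set (parse, "parse-private-sections", TRUE, NULL);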
This should open the way to properly handle all private section streams,
including:
* DSM-CC
* MHEG
* Carousel data
* Metadata streams (though I haven't seen any of those in the wild)
* ... And all other specs/protocols making use of those
Partially fixes #560631
Migrate the code to use the new parser API based on GstMpegVideoPacket.
Also try to optimize gst_mpegv_parse_process_config() by using more of
GstMpegVideoPacket and determining the extension_start_code_identifier
prior to calling the parser function for that extension packet.
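As a rough illustration of the second point (assuming the GstMpegVideoPacket
layout from the codecparsers library, where data + offset points just past
the start code):

    /* Peek the extension_start_code_identifier (top 4 bits of the first
     * byte after the extension start code) before deciding which
     * extension parsing function to call. */
    guint8 ext_id;

    if (packet->size < 1)
      return FALSE;

    ext_id = packet->data[packet->offset] >> 4;

    switch (ext_id) {
      case GST_MPEG_VIDEO_PACKET_EXT_SEQUENCE:
        /* parse the sequence extension */
        break;
      case GST_MPEG_VIDEO_PACKET_EXT_QUANT_MATRIX:
        /* parse the quantization matrix extension */
        break;
      default:
        break;
    }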
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Since we now send all sections to the packetizer, we no longer need to do
any more in-depth checks for the validity of a section.
The choice boils down to:
1) Is it from a known PES PID? If so, pass it on (which might mean just pushing
downstream in the case of tsparse, or accumulating PES data for tsdemux)
2) Is it from a known SI PID? If so, pass it to the section packetizer
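A sketch of that dispatch (helper names are illustrative, not the real
functions):

    /* per-packet decision once the packetizer has handed us a packet */
    if (is_known_pes_pid (base, packet->pid)) {
      /* tsparse: push downstream as-is; tsdemux: accumulate PES data */
      handle_pes_packet (base, packet);
    } else if (is_known_si_pid (base, packet->pid)) {
      /* hand the payload over to the section packetizer */
      push_to_section_packetizer (base, packet);
    }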
We still have some other stream types which haven't been ported, but
we will do so once we have defined the enums in the mpegts library.
Also add some FIXMEs regarding items discovered during analysis
* Only mpeg-ts section packetization remains.
* Improve code to detect duplicated sections as early as possible
* Add FIXME for various issues that need fixing (but are not regressions)
https://bugzilla.gnome.org/show_bug.cgi?id=702724
We use add_stream(stream_type:-1) to ensure a program's PCR stream is
also taken into account. For most programs this will re-use an
existing ES stream.
So only warn that we are re-adding a stream if it was already present
AND the call is not just there to ensure the PCR stream is taken into account.
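A sketch of the resulting warning condition (field/variable names are
illustrative):

    /* Don't warn when the "duplicate" add_stream() call is only there to
     * register the PCR pid (stream_type == -1). */
    if (stream != NULL && stream_type != -1)
      GST_WARNING ("Stream already present on pid 0x%04x", pid);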
Only create subtables when needed. Previously one was created every
single time ... just to check whether one was already present.
And speed up code to detect whether a subtable was already present or not.
Overall makes section pushing 2 times faster.
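The pattern is roughly the following (helper names are illustrative):

    /* Look the subtable up first and only allocate one when it is missing,
     * instead of creating a new one just to compare against the list. */
    subtable = find_subtable (stream->subtables, table_id, subtable_extension);
    if (subtable == NULL) {
      subtable = subtable_new (table_id, subtable_extension);
      stream->subtables = g_slist_prepend (stream->subtables, subtable);
    }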
In some cases (e.g. the NIT on a highly-populated DVB-C operator's network),
more than one section will be emitted for the same subtable and version
number.
In order not to lose those updates for the same version number, we checked
against the CRC of the previous section we parsed.
The problem is that, while it made sure we didn't lose any information, it
also meant that if the same section came back (same version, same CRC) later
on we would re-process it, re-parse it and re-emit it.
This version improves on that by keeping a list of previously observed CRCs
for identical PID/subtable/version-number and will only process sections if
they really were never seen in the past (as opposed to just before).
On a 30s clip, this brings the number of NIT section parses down from 4541
to 663.
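A sketch of the de-duplication idea (field/helper names are illustrative):

    /* Remember every CRC already seen for a given PID/table_id/subtable/
     * version and skip sections whose CRC is already in that list. */
    if (g_slist_find (subtable->seen_crcs, GUINT_TO_POINTER (crc)))
      return NULL;              /* seen at some point in the past: drop it */

    subtable->seen_crcs =
        g_slist_prepend (subtable->seen_crcs, GUINT_TO_POINTER (crc));
    /* ... parse and emit the section as usual ... */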
https://bugzilla.gnome.org/show_bug.cgi?id=614479
First send stream-start, then caps, then segment.
In push mode, the segment we push is the one coming from upstream. If we
work in pull mode, we initialize the base segment to BYTES format.
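The ordering boils down to something like this (simplified flow, but these
are the real GStreamer event constructors; srcpad, stream_id, caps and
segment come from the surrounding context):

    gst_pad_push_event (srcpad, gst_event_new_stream_start (stream_id));
    gst_pad_push_event (srcpad, gst_event_new_caps (caps));

    if (!push_mode) {
      /* pull mode: no upstream segment, start from a BYTES base segment */
      gst_segment_init (&segment, GST_FORMAT_BYTES);
    }
    gst_pad_push_event (srcpad, gst_event_new_segment (&segment));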
https://bugzilla.gnome.org/show_bug.cgi?id=702422
Sync byte scan is incorrect for M2TS streams because the 4 timestamp
bytes were not included in the flush size. This can result in an
infinite loop.
Rework the scan code to be clearer and work in all cases.
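Illustrative sketch of the accounting (not the actual code; 188/192 are the
standard TS/M2TS packet sizes):

    /* For M2TS (192-byte packets) the 0x47 sync byte sits 4 bytes into the
     * packet, after the timestamp. When resyncing, those 4 bytes have to
     * be counted in the flush size too, otherwise the scan never advances. */
    guint ts_offset = (packet_size == 192) ? 4 : 0;

    if (data[ts_offset] != 0x47)
      flush_size = packet_size;   /* skip the whole timestamp + TS packet */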
By adding the video-source-filter during construction time, rather than
patching it in later (*), we can greatly reduce the amount of caps involved
in negotiation, speeding up pipeline creation.
I wrote this while working on speeding up the startup of cheese. My cheese
has been modified to add a capsfilter, filtering for only the configured
resolution. With that cheese patch + this patch, the pipeline creation time
goes from approx. 1.1 seconds to approx. 350 ms. This is with a Logitech 9000
Pro camera, which supports lots of different resolutions at many different
framerates per resolution, causing a caps "explosion" if not filtered.
*) Note the code for this is left in, as it is still necessary if the
video-source-filter is changed between a stop + re-start.
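For reference, that kind of filtering from the application side could look
roughly like this (the camera_src variable and the exact caps are examples;
video-source-filter is the property mentioned above):

    /* Restrict the video source to one resolution so negotiation does not
     * have to consider every mode the camera can produce. */
    GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
    GstCaps *caps = gst_caps_from_string ("video/x-raw, width=1280, height=720");

    g_object_set (filter, "caps", caps, NULL);
    gst_caps_unref (caps);

    /* camera_src is assumed to be the camerabin video source in use */
    g_object_set (camera_src, "video-source-filter", filter, NULL);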
https://bugzilla.gnome.org/show_bug.cgi?id=701953
check_and_replace_src() was setting self->app_vid_src to NULL, which
means that an app setting the video-source property, and then starting,
stopping and re-starting the pipeline (i.e. to make changes to the
video-source-filter property) would no longer have a video-source
after the restart.
This patch fixes this by making gst_camerabin_setup_default_element return a
ref to the passed-in user_element, rather than returning the user_element as
is, so that that ref can be passed on to the bin, and the app_vid_src ref
stays valid.
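A simplified sketch of the idea (illustrative name and signature, not the
real helper):

    /* Return a new ref on the user-provided element so the bin consumes
     * that ref and the caller's app_vid_src reference stays valid. */
    static GstElement *
    setup_element (GstElement * user_element, const gchar * factory_name)
    {
      if (user_element != NULL)
        return gst_object_ref (user_element);

      return gst_element_factory_make (factory_name, NULL);
    }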
https://bugzilla.gnome.org/show_bug.cgi?id=701915
The current fallback to lost_sync seems to introduce a delay before sync is
restored. Let the parser parse and skip the private stream.
In this case the private stream contains the digital camera brand
(in 2010 bytes) and is repeated twice.
https://bugzilla.gnome.org/show_bug.cgi?id=697283
Descriptors are stored as a GValueArray of GString. The downside is
that there is no way to "pass" ownership of a GValue to a GValueArray,
which previously resulted in an expensive copy/free of the (already
expensive) GString.
Here we first estimate the size of the GValueArray, then create it,
and then directly use the GValues of that array.
This speeds up total SI parsing by ~30%.
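The pattern is roughly the following (make_descriptor_string() is a made-up
placeholder for however the GString actually gets built):

    /* Pre-size the GValueArray, append an empty GValue, then initialise and
     * fill that GValue in place so the GString never has to be copied. */
    GValueArray *descriptors = g_value_array_new (nb_desc);
    guint i;

    for (i = 0; i < nb_desc; i++) {
      GValue *value;

      g_value_array_append (descriptors, NULL); /* appends a zero-filled GValue */
      value = g_value_array_get_nth (descriptors, i);

      g_value_init (value, G_TYPE_GSTRING);
      g_value_take_boxed (value, make_descriptor_string (data, i));
    }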