When outputting in AVC3 stream format, the codec_data should not
contain any SPS or PPS, because they are embedded inside the stream.
In the case of avc->bytestream, h264parse pushes the SPS and PPS from
codec_data downstream at the start of the stream, at intervals
controlled by "config-interval", and whenever the codec_data changes.
In the case of avc3->bytestream, h264parse detects that the SPS/PPS are
already present in the stream and sets h264parse->push_codec to FALSE.
Therefore avc3->bytestream was already supported, except for the stream
type.
In the case of bytestream->avc, h264parse generates codec_data caps
from the SPS/PPS parsed out of the stream. However, it does not remove
the SPS/PPS from the stream. bytestream->avc3 is the same as
bytestream->avc, except that the codec_data must not contain any SPS/PPS.
|--------------+-------------+-------------------|
|stream-format | SPS in-band | SPS in codec_data |
|--------------+-------------+-------------------|
| avc | maybe | always |
|--------------+-------------+-------------------|
| avc3 | always | never |
|--------------+-------------+-------------------|
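To illustrate, an application can request avc3 output from h264parse with a
downstream capsfilter; the following is only a sketch (the surrounding
pipeline and variable names are hypothetical, while h264parse, capsfilter
and "config-interval" are real):

  GstElement *parse = gst_element_factory_make ("h264parse", NULL);
  GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
  GstCaps *caps;

  /* Request avc3 output: SPS/PPS stay in-band, codec_data carries none */
  caps = gst_caps_from_string ("video/x-h264,stream-format=avc3,alignment=au");
  g_object_set (filter, "caps", caps, NULL);
  gst_caps_unref (caps);

  /* For byte-stream output, "config-interval" (in seconds) controls how often
   * the SPS/PPS from codec_data are re-inserted in-band */
  g_object_set (parse, "config-interval", 1, NULL);

  /* ... ! h264parse ! capsfilter ! (fragmented) mp4 muxer ... */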
Amendment 2 of ISO/IEC 14496-15 (AVC file format) defines a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV box
(moov.trak.mdia.minf.stbl.stsd.avc1), whereas in AVC3 it is carried in the
first sample of every fragment.
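As a hedged illustration of that difference, a helper like the following
(hypothetical name, offsets per the AVCDecoderConfigurationRecord in
ISO/IEC 14496-15) could check whether a codec_data buffer carries any SPS:

  static gboolean
  codec_data_has_sps (GstBuffer * codec_data)
  {
    GstMapInfo map;
    gboolean has_sps = FALSE;

    if (gst_buffer_map (codec_data, &map, GST_MAP_READ)) {
      /* byte 5, lower 5 bits: numOfSequenceParameterSets */
      if (map.size >= 6)
        has_sps = (map.data[5] & 0x1f) != 0;
      gst_buffer_unmap (codec_data, &map);
    }
    return has_sps;
  }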
https://bugzilla.gnome.org/show_bug.cgi?id=702004
While it was a great idea, various g-i based bindings don't support
GArray with entries greater than sizeof(gpointer) :(
So let's just make everybody happy by using GPtrArray.
And since we're breaking the API, also rename the various descriptor fields
to no longer have the descriptor_ prefix.
It does cost a bit more in terms of memory/cpu usage, but makes it usable
from bindings.
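With GPtrArray, iterating the descriptors from C looks roughly like this
(the descriptor type name follows the mpegts library, though its exact
spelling may differ between versions; field names reflect the rename above):

  guint i;

  for (i = 0; i < descriptors->len; i++) {
    GstMpegTsDescriptor *desc = g_ptr_array_index (descriptors, i);

    g_print ("descriptor tag 0x%02x, length %u\n",
        (guint) desc->tag, (guint) desc->length);
  }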
Do state changes from sink to src. This fixes a race condition in the
pull mode test where the source would start up and push buffers to
queue/identity or aiffparse before the main thread had managed to set
them to PLAYING.
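A minimal sketch of that ordering (element variable names are hypothetical):

  /* Bring elements up sink-first so the source cannot start pushing into
   * elements that are not yet in PLAYING */
  gst_element_set_state (sink, GST_STATE_PLAYING);
  gst_element_set_state (identity, GST_STATE_PLAYING);
  gst_element_set_state (parse, GST_STATE_PLAYING);
  gst_element_set_state (src, GST_STATE_PLAYING);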
These are the values officially registered in the base specification
(H.222.0/13818-1). Later on we can add other enums for other variants.
Note that the enum is not used in the structure fields (such as a PMT
stream's stream_type field), since those fields can contain values from
other variants.
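A sketch of such an enum (the values are the registered stream_type codes
from H.222.0; the identifier names here are illustrative, not necessarily
the ones used by the library):

  typedef enum {
    /* 0x00 is ITU-T | ISO/IEC reserved */
    STREAM_TYPE_VIDEO_MPEG1         = 0x01,  /* ISO/IEC 11172-2 video */
    STREAM_TYPE_VIDEO_MPEG2         = 0x02,  /* H.262 / ISO/IEC 13818-2 video */
    STREAM_TYPE_AUDIO_MPEG1         = 0x03,  /* ISO/IEC 11172-3 audio */
    STREAM_TYPE_AUDIO_MPEG2         = 0x04,  /* ISO/IEC 13818-3 audio */
    STREAM_TYPE_PRIVATE_SECTIONS    = 0x05,
    STREAM_TYPE_PRIVATE_PES_PACKETS = 0x06,
    STREAM_TYPE_AUDIO_AAC_ADTS      = 0x0f,  /* ISO/IEC 13818-7 audio (ADTS) */
    STREAM_TYPE_VIDEO_H264          = 0x1b   /* H.264 / ISO/IEC 14496-10 */
    /* ... further registered values follow the same pattern */
  } ExampleStreamType;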
Serves as an example of how to use the new mpegts library from an
application.
Will parse/dump all sections received on the bus.
Usage is ./tsparse <any gst-launch line using tsdemux or tsparse>
Examples:
./tsparse file:///some/mpegtsfile ! tsparse ! fakesink
./tsparse dvb://CHANNEL ! tsparse ! fakesink
./tsparse playbin uri=dvb://CHANNEL
./tsparse playbin uri=file:///some/mpegtsfile
...
https://bugzilla.gnome.org/show_bug.cgi?id=702724
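A rough sketch of the bus handling such an application performs (the parse
function and section type come from the mpegts library, exact naming may
differ between versions; error handling is omitted):

  static gboolean
  bus_watch_cb (GstBus * bus, GstMessage * message, gpointer user_data)
  {
    if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ELEMENT) {
      GstMpegTsSection *section = gst_message_parse_mpegts_section (message);

      if (section) {
        g_print ("Got section with table_id 0x%02x\n", (guint) section->table_id);
        gst_mpegts_section_unref (section);
      }
    }
    return TRUE;
  }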