Keep a list of the current global tags around and push it
whenever a new stream is started. Also convert all stream-specific
tags to global ones: they are stream-specific for the container,
so for the streams coming out of that container they are global.
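A minimal sketch of the conversion (stream_tags and srcpad are
illustrative names, not the actual code):

GstTagList *global = gst_tag_list_copy (stream_tags);

/* mark the copied tags as global before pushing them */
gst_tag_list_set_scope (global, GST_TAG_SCOPE_GLOBAL);
gst_pad_push_event (srcpad, gst_event_new_tag (global));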
https://bugzilla.gnome.org/show_bug.cgi?id=644395
The PAT is related to the stream; we therefore want it cleared along
with everything else that is stream-related.
This commented section was from the (old) mpegtsparse and *might* have
been related to speeding up DVB start-up. But we have another plan for that.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724716
The requested TS might be beyond the last observed PCR. In order to calculate
a coherent offset, we need to use the last and previous-to-last groups.
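A rough sketch of that calculation, where last and prev stand for the
last and previous-to-last groups (names are illustrative, not the
actual code):

/* slope between the previous-to-last and last groups */
guint64 pcr_delta = last->first_pcr - prev->first_pcr;
guint64 offset_delta = last->first_offset - prev->first_offset;

/* extrapolate the byte offset for a PCR past the last observation */
offset = last->first_offset +
    gst_util_uint64_scale (target_pcr - last->first_pcr,
    offset_delta, pcr_delta);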
https://bugzilla.gnome.org/show_bug.cgi?id=721035
The muxer is now able to include DVB sections in the transport stream.
The si-interval property will determine how often the SI tables are
muxed into the stream.
The sections are handled by the mpeg-ts library. Below is a small example
that will include a Network Information Table with a Network Name
descriptor in the stream.
#include <gst/mpegts/mpegts.h>

GstMpegTsNIT *nit;
GstMpegTsDescriptor *descriptor;
GstMpegTsSection *section;
GstElement *mpegtsmux;

// the mpeg-ts library must be initialized before use
gst_mpegts_initialize ();

nit = gst_mpegts_section_nit_new ();
nit->actual_network = TRUE;

descriptor = gst_mpegts_descriptor_from_dvb_network_name ("Network name");
g_ptr_array_add (nit->descriptors, descriptor);

section = gst_mpegts_section_from_nit (nit);

// mpegtsmux should be retrieved from the pipeline
gst_mpegts_section_send_event (section, mpegtsmux);
gst_mpegts_section_unref (section);
The original code (the old mpegtsparse) on which this plugin was based
was dual-licensed. This allowed usage of the code under any of the
licenses (including the LGPL):
"""
* Alternatively, the contents of this file may be used under the terms of
* the GNU Lesser General Public License Version 2 or later (the "LGPL"),
* in which case the provisions of the LGPL are applicable instead
* of those above. If you wish to allow use of your version of this file only
* under the terms of the LGPL, and not to allow others to
* use your version of this file under the terms of the MPL, indicate your
* decision by deleting the provisions above and replace them with the notice
* and other provisions required by the LGPL. If you do not delete
* the provisions above, a recipient may use your version of this file under
* the terms of the MPL or the LGPL.
"""
When refactoring (leading to the creation of this new plugin), I chose to
make all new code LGPL-only (which the dual license allowed for the
pre-existing code) by removing the MPL sections.
The headers were all updated, but not the plugin license field. This commit
fixes this.
In order to be able to change the caps on multiple capsfilters, the
source element needs to be stopped; otherwise it will get a few
reconfigure events and might try to renegotiate while the bin
is still transitioning its caps, leading to a not-negotiated failure.
The image capture then won't happen, because the source will be
unusable.
The solution is to keep the source paused while the caps are being
changed in the bin, and then bring it back to playing once that is
done. Unfortunately this increases the image capture latency, but it
should always work.
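A minimal sketch of that sequence (the element variables are
illustrative, not the actual camerabin code):

/* keep the source paused while the capsfilters are reconfigured */
gst_element_set_state (src, GST_STATE_PAUSED);
g_object_set (image_capsfilter, "caps", image_caps, NULL);
g_object_set (viewfinder_capsfilter, "caps", viewfinder_caps, NULL);
/* bring the source back once the bin is done transitioning */
gst_element_set_state (src, GST_STATE_PLAYING);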
A possible improvement to reduce the latency is to add another signal
to be called before 'start-capture': 'prepare-capture'. At this step
the camera source should set all caps it needs and get the source
ready for doing the capture as soon as 'start-capture' is called.
This can be done in a future commit.
* stream-start-id is mandatory at the beginning, so add that to the
GDP headers
* caps must be sent before the new segment; invert the order from the
legacy 0.10 code (see the sketch below)
And fix the tests, as a ref is now kept for those buffers that compose
the header.
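For reference, a sketch of the event order the headers now follow
(the stream id is illustrative; this is standard GStreamer event
construction, not the actual gdp code):

GstSegment segment;

/* stream-start must come first, then caps, then the segment */
gst_pad_push_event (srcpad, gst_event_new_stream_start ("gdp-stream-id"));
gst_pad_push_event (srcpad, gst_event_new_caps (caps));
gst_segment_init (&segment, GST_FORMAT_TIME);
gst_pad_push_event (srcpad, gst_event_new_segment (&segment));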
It is not perfect, but it allows us to be sure that the mandatory 'framerate'
field is present in the caps.
As soon as the relevant information is found in the stream, the caps
will be updated.
https://bugzilla.gnome.org/show_bug.cgi?id=723243
An SEI RBSP can contain more than one SEI message, as specified in
7.4.2.3.1.
This commit changes the parser API: the gst_h264_parser_parse_sei()
function now creates and fills a GArray containing GstH264SEIMessages.
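Callers now iterate over the returned array, along these lines (a
sketch; error handling omitted):

GArray *messages = NULL;
guint i;

if (gst_h264_parser_parse_sei (parser, &nalu, &messages) ==
    GST_H264_PARSER_OK) {
  for (i = 0; i < messages->len; i++) {
    GstH264SEIMessage *sei = &g_array_index (messages, GstH264SEIMessage, i);
    /* dispatch on sei->payloadType ... */
  }
  g_array_free (messages, TRUE);
}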
https://bugzilla.gnome.org/show_bug.cgi?id=721715
If the first buffer that we handle for a stream has no timestamp, we
would never consider this pad again for muxing, which causes queues to
fill up and pipelines to stall. Instead, try to mux pads with -1
timestamps as soon as possible.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=722330
mpeg4videoparse might not push buffers while parsing. If those buffers
carry the DISCONT flag, it gets lost and downstream won't get any
buffer with the flag set.
Fix this by adding the DISCONT flag to the next pushed buffer.
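Sketch of the idea (pending_discont is an illustrative gboolean, not
the actual field name):

/* when a buffer is dropped, remember its DISCONT flag */
if (GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_DISCONT))
  pending_discont = TRUE;

/* and transfer it to the next buffer that is actually pushed */
if (pending_discont) {
  outbuf = gst_buffer_make_writable (outbuf);
  GST_BUFFER_FLAG_SET (outbuf, GST_BUFFER_FLAG_DISCONT);
  pending_discont = FALSE;
}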
This makes backwards playback work.
Collectpads assumes that it can pass any buffer to the clip function
for adjustment, some of which are artificially injected - so don't
adjust global timestamp tracking there. Instead, only adjust the
buffer timestamps and use them directly in the collection function.
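A sketch of what such a clip function looks like (assuming time-format
segments; not the exact code):

static GstFlowReturn
clip_to_running_time (GstCollectPads * pads, GstCollectData * cdata,
    GstBuffer * buf, GstBuffer ** outbuf, gpointer user_data)
{
  buf = gst_buffer_make_writable (buf);
  /* only touch the buffer timestamp, no global state */
  GST_BUFFER_PTS (buf) =
      gst_segment_to_running_time (&cdata->segment, GST_FORMAT_TIME,
      GST_BUFFER_PTS (buf));
  *outbuf = buf;
  return GST_FLOW_OK;
}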
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=698748
Per ETSI EN 300 743 V1.3.1 (2006-11), section 7.2.1 (display definition
segment specifications), the parameters of the display window come in
this order: Xmin, Xmax, Ymin, Ymax.
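In code, the fix amounts to reading the fields in that order (a sketch
with illustrative variable names; data points at the window fields):

guint16 win_x_min, win_x_max, win_y_min, win_y_max;

/* 7.2.1: display window parameters, in spec order */
win_x_min = GST_READ_UINT16_BE (data);       /* Xmin */
win_x_max = GST_READ_UINT16_BE (data + 2);   /* Xmax */
win_y_min = GST_READ_UINT16_BE (data + 4);   /* Ymin */
win_y_max = GST_READ_UINT16_BE (data + 6);   /* Ymax */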
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Pierre-Yves Mordret <pierre-yves.mordret@st.com>
https://bugzilla.gnome.org/show_bug.cgi?id=720382
The perspective plugin applies a 2D perspective (also called projective)
transform to the frame buffer.
A perspective transform can be used for instance to perform keystone
correction when playing the content with a video projector.
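For reference, a projective transform maps each destination pixel back
to a source position through a 3x3 matrix; a sketch (the row-major
matrix layout is an assumption of this example):

/* m is a row-major 3x3 homography, (x, y) a destination pixel */
gdouble w  = m[6] * x + m[7] * y + m[8];
gdouble xs = (m[0] * x + m[1] * y + m[2]) / w;
gdouble ys = (m[3] * x + m[4] * y + m[5]) / w;
/* sample the source frame at (xs, ys) */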
https://bugzilla.gnome.org/show_bug.cgi?id=710810
Conversion to byte-stream/nal crashes without this, because the
baseparse frame covering all the NALUs is finished for the first NALU
and then used again for parsing the second NALU, except that by then
the frame's buffer is already gone. Instead, we create temporary frames
for every NALU.
In case more data than a start code alone is needed to decide whether
it ends a frame, arrange for more input data and decide when available.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711627
When the input buffer is empty and we need more data to determine
whether or not to terminate the previous frame, the last start code
location needs to be set to 4 bytes before the current position
(the size of a start code is 32 bits).
https://bugzilla.gnome.org/show_bug.cgi?id=711627
Force filesink to NULL before posting video-done, to make sure the
file was closed.
This had to be done from a separate thread, to avoid calling a state
change from a sync message handler.
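Roughly (a sketch with illustrative names; the real code also posts
video-done afterwards):

static gpointer
filesink_stop_thread (gpointer data)
{
  GstElement *filesink = data;

  /* safe here: we are not inside the sync bus handler */
  gst_element_set_state (filesink, GST_STATE_NULL);
  /* ... post the video-done message ... */
  return NULL;
}

/* from the sync message handler */
g_thread_new ("filesink-stop", filesink_stop_thread, filesink);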
https://bugzilla.gnome.org/show_bug.cgi?id=709373
When the frame buffer is AYUV, writing all zeros does not set it to
black: in the YUV colorspace, 0x10 is the black level for luminance and
0x80 is the black level for chrominance.
Fix setting the background to black when the out_frame format is AYUV;
for all the other supported formats, zeroing the data with memset is
still the right thing to do.
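A sketch of the AYUV fill (data and n_pixels are illustrative; the
opaque alpha value is an assumption of this example):

guint8 *p = data;
guint i;

for (i = 0; i < n_pixels; i++) {
  p[0] = 0xff;  /* A: opaque (assumed for this sketch) */
  p[1] = 0x10;  /* Y: black level */
  p[2] = 0x80;  /* U: neutral chroma */
  p[3] = 0x80;  /* V: neutral chroma */
  p += 4;
}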
https://bugzilla.gnome.org/show_bug.cgi?id=710392
The initial par_n = par_d = 0; was always overwritten, since the switch/case
handles all values.
Also remove the 0 case (it has the same handling as the default case).
liveadder sometimes calculates the offsets incorrectly before adding. The
resulting errors can easily be heard when mixing silence with a sine wave.
I'm not sure what the exact conditions are to trigger this, but it definitely
happens when the buffers of two streams have a different duration and buffer
length and duration don't match exactly for one stream because of rounding
errors (e.g. duration=0:00:00.021333333).
I have to admit I got lost in the math somewhere, but it seems that not
rounding in gst_live_adder_length_from_duration() causes 1-sample overlaps in
consecutive buffers from the same stream.
When using gst_util_uint64_scale_int_round() instead of just truncating, the
sine sounds correct again.
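To illustrate with the duration above at a 48 kHz sample rate
(illustrative numbers, not the exact code):

/* 21333333 ns * 48000 / GST_SECOND = 1023.999984 samples */
len = gst_util_uint64_scale_int (duration, rate, GST_SECOND);        /* 1023 */
len = gst_util_uint64_scale_int_round (duration, rate, GST_SECOND);  /* 1024 */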
https://bugzilla.gnome.org/show_bug.cgi?id=708345
It is quite possible that we get PTS/DTS before the first
PCR/offset observation.
In order to end up with valid timestamps, we wait until at least one
stream was able to get a proper running-time for a PTS/DTS.
Until then, we queue up the pending buffers to push out.
Once we see a first valid timestamp, we re-evaluate the amount of
running-time elapsed (based on the returned initial running-time and the
amount of data/DTS queued up) for each stream.
Taking the biggest amount of elapsed time, we set that on the packetizer
as the initial offset and recalculate the running-time PTS/DTS of all
pending buffers.
Note: the buffer queueing system can also be used later on for the
DVB fast-start proposal (where we queue up all stream packets before
seeing the PAT/PMT and then push them once we know whether they belong
to the chosen program).
This allows:
* Better duration estimation
* More accurate PCR location
* Overall more accurate running-time location and calculation
Locations and values of PCRs are recorded in groups (PCROffsetGroup)
holding notable PCR/offset observations (for example when the bitrate
changed). PCRs and offsets are stored as 32-bit values to
reduce memory usage (they are differences against that group's
first_pcr and first_offset).
Each group also contains a global PCR offset (pcr_offset) which
indicates how far in the stream that group is.
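An illustrative sketch of that layout (the field names approximate the
description above, not necessarily the actual struct):

typedef struct
{
  guint64 first_pcr;      /* first PCR observed in this group */
  guint64 first_offset;   /* byte offset of that first observation */
  guint64 pcr_offset;     /* how far in the stream this group is */

  /* notable observations, stored as 32-bit deltas against
   * first_pcr and first_offset to keep memory usage down */
  guint32 *pcrs;
  guint32 *offsets;
  guint nb_entries;
} PCROffsetGroup;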
Whenever new PCR values are observed, we store them in a sliding
window estimator (PCROffsetGroupCurrent).
When a reset/wrapover/gap is detected, we close the current group with
current values and start a new one (the pcr_offset of that new group
is also calculated).
When a notable change in bitrate is observed (+/- 10%), we record
new values in the current group. This is a compromise between
storing all PCR/offset observations and none, while at the same time
providing better information for running-time<=>offset calculation
in VBR streams.
Whenever a new non-contiguous group is started (due to seeking, for
example), we re-evaluate the pcr_offset of each group. This allows
detecting PCR wrapovers/resets as quickly as possible.
When wanting to find the offset of a certain running-time, one can
iterate over the groups by looking at their pcr_offset (which in essence
*is* the running-time of that group within the overall stream).
Once a group is found (or the neighbouring groups, if the running-time
falls between two groups), one can use the recorded values to find the
most accurate offset.
Right now this code is only used in pull mode, but it could also
be activated later on for any seekable stream, like live timeshifting
with queue2.
Future improvements:
* some heuristics to "compress" the stored values in groups so as to keep
the memory usage down while still keeping a decent amount of notable
points.
* After a seek compare expected and obtained PCR/Offset and if the
difference is too big, re-calculate position with newly observed
values and seek to that more accurate position.
Note that this code will *not* provide keyframe-accurate seeking, but
will allow a much more accurate PCR/running-time/offset location on
any random stream.
For past (observed) values it will be as accurate as can be.
For future values it will be better than the current situation.
Finally, the more you seek, the more accurate your positioning will be.
The previous code could enter an infinite loop, because the adapter state
could get out of sync with its mapped data state after sync was lost.
The code was pretty confusing, so it has been rewritten to be clearer.
The easiest way to reproduce the infinite loop is to use the breakmydata
element before tsdemux to trigger a resync.
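The rewritten logic boils down to scanning for consecutive 0x47 sync
bytes one packet apart and flushing everything before them (a simplified
sketch, assuming 188-byte packets):

gsize avail = gst_adapter_available (adapter);
const guint8 *data = gst_adapter_map (adapter, avail);
gsize pos = 0;

while (pos + 188 < avail &&
    !(data[pos] == 0x47 && data[pos + 188] == 0x47))
  pos++;

gst_adapter_unmap (adapter);
gst_adapter_flush (adapter, pos);   /* drop the garbage before the sync point */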
https://bugzilla.gnome.org/show_bug.cgi?id=708161
When outputting in AVC3 stream format, the codec_data should not
contain any SPS or PPS, because they are embedded inside the stream.
In the case of avc->bytestream, h264parse will push the SPS and PPS from
codec_data downstream at the start of the stream, at intervals
controlled by "config-interval", and whenever there is a codec_data change.
In the case of avc3->bytestream, h264parse detects that there are
already SPS/PPS in the stream and sets h264parse->push_codec to FALSE.
Therefore avc3->bytestream was already supported, except for the stream
type.
In the case of bytestream->avc, h264parse will generate codec_data caps
from the SPS/PPS parsed out of the stream. However, it does not remove these
SPS/PPS from the stream. bytestream->avc3 is the same as bytestream->avc,
except that the codec_data must not have any SPS/PPS in it.
|--------------+-------------+-------------------|
|stream-format | SPS in-band | SPS in codec_data |
|--------------+-------------+-------------------|
| avc | maybe | always |
|--------------+-------------+-------------------|
| avc3 | always | never |
|--------------+-------------+-------------------|
Amendment 2 of ISO/IEC 14496-15 (AVC file format) defines a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV box
(moov.trak.mdia.minf.stbl.stsd.avc1), but in AVC3 it goes in the
first sample of every fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=702004
The prog-map property of mpegtsmux only allows grouping PIDs together in a program.
The program number set in the PAT/PMT tables cannot be set explicitly.
This patch sets the program number according to the prog-map.
If a program id of 0 is given, the first vacant program number starting from 1 will be used.
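Usage sketch (the structure name, pad names and program numbers are
illustrative):

GstStructure *pm = gst_structure_new ("program_map",
    "sink_300", G_TYPE_INT, 1,   /* explicit program number 1 */
    "sink_301", G_TYPE_INT, 1,
    "sink_302", G_TYPE_INT, 0,   /* 0: first vacant number is picked */
    NULL);
g_object_set (mpegtsmux, "prog-map", pm, NULL);
gst_structure_free (pm);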
https://bugzilla.gnome.org/show_bug.cgi?id=697239