The image capture mutex and the pad object lock could cause a race if
the pad query was made right when the image probe was running: the
image probe needs the capture mutex, while the query needs the pad
object lock.
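A minimal sketch of the problematic lock ordering, with illustrative
names only (this is not the actual camerabin code): two threads taking
the same two locks in opposite order, so each can end up waiting on the
lock the other holds.

#include <glib.h>

static GMutex capture_mutex;    /* held while an image capture is handled */
static GMutex pad_object_lock;  /* the pad's object lock                  */

static void
image_probe (void)
{
  g_mutex_lock (&capture_mutex);    /* probe takes the capture mutex ...    */
  g_mutex_lock (&pad_object_lock);  /* ... and then needs the pad lock      */
  /* ... */
  g_mutex_unlock (&pad_object_lock);
  g_mutex_unlock (&capture_mutex);
}

static void
pad_query (void)
{
  g_mutex_lock (&pad_object_lock);  /* query takes the pad lock ...         */
  g_mutex_lock (&capture_mutex);    /* ... and then needs the capture mutex */
  /* ... */
  g_mutex_unlock (&capture_mutex);
  g_mutex_unlock (&pad_object_lock);
}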
It might be racy with the image probe thread, as the probe uses the
capture mutex just like camerabin's start-capture handler. The
start-capture handler would be waiting for the source's streaming
thread to stop so it can set the source state to READY, while the
probe would be blocked waiting to acquire the capture mutex.
That causes a deadlock.
Don't rely on core implementation details, which are private and
may change. It's also not needed here; the performance impact is
close to none. Also copy the buffer before changing its metadata.
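For the metadata point, the usual GstBuffer pattern is to take a
writable copy before touching timestamps or flags; a minimal sketch
(not the actual element code):

#include <gst/gst.h>

static GstBuffer *
retimestamp_buffer (GstBuffer * buf, GstClockTime ts)
{
  /* copies the buffer if it is shared, so we never modify metadata on
   * a buffer that may still be referenced elsewhere */
  buf = gst_buffer_make_writable (buf);
  GST_BUFFER_PTS (buf) = ts;
  GST_BUFFER_DTS (buf) = ts;
  return buf;
}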
Get rid of some indirections and inefficiencies and just payload
things directly. This gives us more control over what memory is
allocated where and how, and makes things much simpler. In
particular, we can now allocate the payload header plus the
GstMemory to represent it in one go.
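A rough sketch of the direct payloading idea (simplified; the actual
code may fold the header allocation and the GstMemory into a single
allocation): wrap the freshly allocated header bytes in a GstMemory and
share the payload's memories instead of copying the data.

#include <gst/gst.h>

static GstBuffer *
gdp_payload_buffer (GstBuffer * payload, guint8 * header, gsize header_len)
{
  GstBuffer *out = gst_buffer_new ();

  /* wrap the header allocation; g_free() runs when the memory is released */
  gst_buffer_append_memory (out,
      gst_memory_new_wrapped (0, header, header_len, 0, header_len,
          header, (GDestroyNotify) g_free));

  /* share the payload's memories instead of copying the data */
  gst_buffer_copy_into (out, payload, GST_BUFFER_COPY_MEMORY, 0, -1);

  return out;
}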
Get rid of the now-useless packetizer struct and just call the
internal functions directly. Also remove the version property, which
is now defunct, not least because we create the packetizer with the
version in the init function, before a version can even be set.
Add a function to calculate the payload CRC across multiple memories,
so we don't have to merge buffers with multiple memories just to
calculate the CRC. Also make the CRC calculation function static,
since it's not used outside dataprotocol.h, and move the
special-casing of length = 0 -> CRC = 0 from the caller into the CRC
function.
Perhaps more importantly, since payload CRC is off by default:
don't map the buffer (and possibly merge memories in the process)
if we are not going to use it to calculate a CRC anyway.
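A sketch of the multi-memory CRC idea; gst_dp_crc_update() stands in
for an incremental CRC-16 routine and is an assumption, not the real
dataprotocol API.

#include <gst/gst.h>

static guint16 gst_dp_crc_update (guint16 crc, const guint8 * data,
    gsize size);    /* hypothetical incremental CRC-16 helper */

static guint16
gst_dp_buffer_crc (GstBuffer * buffer)
{
  guint16 crc = 0;
  guint i, n;

  /* length = 0 -> CRC = 0, handled here instead of in the caller */
  if (gst_buffer_get_size (buffer) == 0)
    return 0;

  n = gst_buffer_n_memory (buffer);
  for (i = 0; i < n; i++) {
    GstMemory *mem = gst_buffer_peek_memory (buffer, i);
    GstMapInfo map;

    if (gst_memory_map (mem, &map, GST_MAP_READ)) {
      /* update the CRC one memory at a time, no merging required */
      crc = gst_dp_crc_update (crc, map.data, map.size);
      gst_memory_unmap (mem, &map);
    }
  }
  return crc;
}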
This can happen if this is a live pipeline and no source produced any
buffer or sent any caps before an output buffer should have been
produced according to the latency.
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, whether we have enough input or not.
Some video bitstreams report a too restrictive set of profiles. If a
video decoder were to strictly follow the indicated profile, it
wouldn't support that stream, even though it could in theory and in
practice. So we should relax the profile restriction to allow the
decoder to get connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
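One way to express that relaxation, shown only as an illustration (the
actual patch may instead extend the set of accepted profiles rather
than drop the field):

#include <gst/gst.h>

/* Drop the "profile" field from the caps used for the compatibility
 * check, so an over-restrictive profile reported by the bitstream does
 * not prevent the parser and the decoder from being linked. */
static GstCaps *
relax_profile (GstCaps * caps)
{
  guint i, n;

  caps = gst_caps_make_writable (caps);
  n = gst_caps_get_size (caps);
  for (i = 0; i < n; i++)
    gst_structure_remove_field (gst_caps_get_structure (caps, i), "profile");

  return caps;
}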
When dealing with random-access content (such as files), we initially
search for the last PCR in order to figure out the duration and to
handle other position estimations such as those used in seeking.
Previously, the code looking for that last PCR would search in the last
640kB of the file going forward, and stop at the first PCR encountered.
The problem with that was two-fold:
* It wouldn't really be the last PCR (it would be the first one within
those last 640kB). In the case of VBR files, this would throw off the
duration and seek code slightly.
* It would fail on files with bitrates higher than 52Mbit/s (not common)
Instead, this patch modifies that code by:
* Scanning over the last 2048kB (allows coping with streams of up to
160Mbit/s)
* Starting from the end of the file, going over chunks of 300 MPEG-TS
packets
* Not stopping at the first PCR detected in a chunk, but instead
recording all of them, and only stopping the search once there was at
least one PCR within that chunk
This should improve duration reporting and seeking operations on VBR
files (see the sketch below).
https://bugzilla.gnome.org/show_bug.cgi?id=708532
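A rough sketch of the new backwards scan described above; the scan
context and the scan_chunk_for_pcrs() helper are assumptions, not the
actual mpegtsbase API.

#include <gst/gst.h>

#define SCAN_SIZE  (2048 * 1024)   /* look at the last 2048kB of the file */
#define CHUNK_SIZE (300 * 188)     /* 300 MPEG-TS packets per chunk       */

typedef struct _Scanner Scanner;   /* hypothetical scan context */
/* hypothetical helper: scans [offset, offset + len), records every PCR
 * it finds, returns how many there were and leaves the offset of the
 * last one in *last_pcr_offset */
static guint scan_chunk_for_pcrs (Scanner * s, guint64 offset, guint64 len,
    guint64 * last_pcr_offset);

static gboolean
find_last_pcr (Scanner * s, guint64 file_size, guint64 * last_pcr_offset)
{
  guint64 stop = file_size > SCAN_SIZE ? file_size - SCAN_SIZE : 0;
  guint64 pos = file_size;

  /* walk chunks backwards from the end of the file */
  while (pos > stop) {
    guint64 chunk_start = pos > CHUNK_SIZE ? pos - CHUNK_SIZE : 0;

    if (scan_chunk_for_pcrs (s, chunk_start, pos - chunk_start,
            last_pcr_offset) > 0)
      return TRUE;   /* this chunk had PCRs; the last one recorded wins */

    pos = chunk_start;
  }
  return FALSE;
}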
Sometimes rawparse does not handle the seeking query properly; it
should send the query upstream first. For example, upstream could
support seeking in TIME format (but not in BYTE format), so the BYTE
format seeking query that rawparse sends in push mode would fail.
https://bugzilla.gnome.org/show_bug.cgi?id=722764
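A minimal sketch of the intended handling (function and pad names are
illustrative, not the actual rawparse code):

#include <gst/gst.h>

static gboolean
handle_seeking_query (GstPad * sinkpad, GstQuery * query)
{
  /* upstream may support seeking in TIME format even when BYTE-based
   * seeking would fail, so ask the peer first */
  if (gst_pad_peer_query (sinkpad, query))
    return TRUE;

  /* ... otherwise fall back to the existing BYTE-format handling ... */
  return FALSE;
}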
Read PNG data chunk in one go by letting the parser
base class know the size we need, so that it doesn't
drip-feed us small chunks of data (causing a lot of
reallocs and memcpy in the process) until we have
everything.
Improves parsing performance of very large PNG files (65MB) from
~13 seconds to a couple of milliseconds.
https://bugzilla.gnome.org/show_bug.cgi?id=736176
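The mechanism is the baseparse minimum frame size; a rough sketch, with
the chunk offset handling simplified and the helper name assumed:

#include <gst/base/gstbaseparse.h>

static void
request_full_chunk (GstBaseParse * parse, guint offset, guint32 chunk_length)
{
  /* a PNG chunk is: 4-byte length + 4-byte type + data + 4-byte CRC;
   * ask baseparse for all of it so the data arrives in one go */
  gst_base_parse_set_min_frame_size (parse,
      offset + 4 + 4 + chunk_length + 4);
}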
This commit adds a helper to convert a frame to frame-layer format and
uses it to implement these two stream-format conversions:
- asf --> sequence-layer-frame-layer
- asf --> frame-layer
In simple/main profile we basically have a raw frame, so building a
frame layer isn't too complicated. But in advanced profile, the first
frame-layer should contain the sequence-header, entrypoint and frame,
and each keyframe should contain the entrypoint, so we have to handle
these carefully.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
Add a helper to check that the output stream-format is coherent with
the profile and header-format. It also checks whether we know how to
do the conversion if the input stream-format differs from the selected
output format.
So, in case the output stream-format is not allowed, it will now fail
at negotiation rather than in pre_push_frame.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
This commit introduces a helper to convert an ASF frame to BDU format
with start codes and uses this helper to implement the following
stream-format conversions:
- asf --> bdu
- asf --> sequence-layer-bdu
- asf --> sequence-layer-raw-frame
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It adds support for the following stream-format conversions:
- bdu --> sequence-layer-bdu
- bdu-frame --> sequence-layer-bdu-frame
- frame-layer --> sequence-layer-frame-layer
For these conversions, the only requirement is to push a sequence-layer
buffer prior to the data.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
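A minimal sketch of pushing the sequence-layer buffer before the data;
the helper name and the tracking flag are assumptions, not the actual
vc1parse code:

#include <gst/base/gstbaseparse.h>

static GstFlowReturn
maybe_push_sequence_layer (GstBaseParse * parse, GstBuffer * seq_layer,
    gboolean * seq_layer_sent)
{
  if (*seq_layer_sent)
    return GST_FLOW_OK;

  *seq_layer_sent = TRUE;
  /* push the stored sequence-layer buffer once, before the first frame */
  return gst_pad_push (GST_BASE_PARSE_SRC_PAD (parse),
      gst_buffer_ref (seq_layer));
}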
It prepares the template for stream-format conversion and implements
the following conversions:
- sequence-layer-bdu --> bdu
- sequence-layer-bdu-frame --> bdu-frame
- sequence-layer-frame-layer --> frame-layer
Work is done in the pre_push_frame() method.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
gstinteraudiosrc.c: In function 'gst_inter_audio_src_create':
gstinteraudiosrc.c:339:27: error: variable 'buffer_samples' set but not used [-Werror=unused-but-set-variable]
guint64 period_samples, buffer_samples;
^
The whole not_linked optimisation is really a bit dodgy here, but
let's leave it in place for now, and at least start pushing data
again when a pad gets linked later, in which case we should get a
RECONFIGURE event.
The current CLAMP checks whether the value is below 0 or above 255.
Considering it is an unsigned value, it can never be less than zero,
so that comparison is unnecessary. Switch to checking just the upper
bound.
CID #1139796
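The change amounts to something like the following (illustrative, not
the exact code from the element):

#include <glib.h>

static guint8
to_byte (guint value)
{
  /* before: value = CLAMP (value, 0, 255); the lower bound can never
   * trigger for an unsigned value, so only check the upper bound */
  if (value > 255)
    value = 255;
  return value;
}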
The value from left_luminance is assigned to out_luminance here, but
that stored value is not used before it is overwritten in the next
iteration of the loop. Remove the assignment.
CID #1226473
As a consequence, tsdemux won't remove its pads anymore on EOS.
Fixes the case where mpegtsbase is not able to process new packets
after EOS, as the corresponding pids aren't known anymore because
the programs were removed and the pes/psi were kept, preventing the
PAT from being parsed again.
https://bugzilla.gnome.org/show_bug.cgi?id=738695
It was using 24000/24000/48000, but I think it meant to use
24000/32000/48000. Not 100% sure...
https://en.wikipedia.org/wiki/G.722.1 has the list of supported
bitrates. It's not clear whether the "flag" code maps to this,
however.
Coverity 206072
This parses the frame_packing_arrangement() payload in the SEI message.
This information can be used by decoders to appropriately rearrange the
samples which belong to the Stereoscopic and Multiview High profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=685215
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Assume that small backward PCR jumps are just from upstream packet
mis-ordering and don't reset the timestamp tracking state, on the
expectation that things will be OK again shortly.
Make the threshold for detecting discont between sequential buffers
configurable and match the smoothing-latency setting on tsparse
to better cope with data bursts.
When the set-timestamps property is set, use PCRs on the provided
(or autodetected) pcr-pid to apply (or replace) timestamps on the
output buffers, using piece-wise linear interpolation.
This allows tsparse to be used to stream an arbitrary mpeg-ts file,
or to smooth jittery reception timestamps from a network stream.
The reported latency is increased to match the smoothing latency if
necessary.
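A sketch of the piece-wise linear interpolation between two PCR
observations (names are illustrative, not the actual tsparse code): a
packet at a byte offset between the offsets of two PCRs gets a
timestamp on the straight line joining the two PCR times.

#include <gst/gst.h>

static GstClockTime
interpolate_ts (guint64 offset,
    guint64 prev_offset, GstClockTime prev_time,
    guint64 next_offset, GstClockTime next_time)
{
  if (next_offset <= prev_offset)
    return prev_time;

  /* prev_time + (next_time - prev_time) * fraction of the byte range */
  return prev_time + gst_util_uint64_scale (next_time - prev_time,
      offset - prev_offset, next_offset - prev_offset);
}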