When iterating sink pads to collect data, we should take the stream lock so
we don't read stale data and possibly deadlock as a result. This fixes
a deterministic deadlock in _wait_and_check() that manifests with high max
latencies in a live pipeline, and fixes other possible race conditions.
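A minimal sketch of the pattern, assuming a GstAggregator-style element
(collect_sink_pad_data() is a hypothetical name, not the element's actual code):

    #include <gst/gst.h>

    /* Take each sink pad's stream lock before reading state that its
     * streaming thread updates, so we never observe half-updated
     * (stale) data. */
    static void
    collect_sink_pad_data (GstElement * element)
    {
      GstIterator *it = gst_element_iterate_sink_pads (element);
      GValue item = G_VALUE_INIT;

      while (gst_iterator_next (it, &item) == GST_ITERATOR_OK) {
        GstPad *pad = g_value_get_object (&item);

        GST_PAD_STREAM_LOCK (pad);
        /* ... collect per-pad data here ... */
        GST_PAD_STREAM_UNLOCK (pad);

        g_value_reset (&item);
      }
      g_value_unset (&item);
      gst_iterator_free (it);
    }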
This can race with the image probe thread, which takes the capture mutex
just like camerabin's start-capture handler does. start-capture would wait
for the source's streaming thread to stop so it can set the source state
to READY, while the probe, running in that streaming thread, would block
waiting to acquire the capture mutex: a deadlock.
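For illustration, a stripped-down GLib model of that lock ordering (all
names hypothetical; this is not camerabin code):

    #include <glib.h>

    static GMutex capture_mutex;

    /* Models the image probe running in the source's streaming thread:
     * it needs the capture mutex before the thread can finish. */
    static gpointer
    streaming_thread_func (gpointer data)
    {
      g_mutex_lock (&capture_mutex);
      g_mutex_unlock (&capture_mutex);
      return NULL;
    }

    int
    main (void)
    {
      GThread *t = g_thread_new ("stream", streaming_thread_func, NULL);

      /* Models the start-capture handler: holds the capture mutex while
       * waiting for the streaming thread to stop. Depending on which
       * thread takes the mutex first this either works or deadlocks,
       * which is exactly the race described above. */
      g_mutex_lock (&capture_mutex);
      g_thread_join (t);
      g_mutex_unlock (&capture_mutex);
      return 0;
    }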
Don't rely on core implementation details, which are private and
may change. It's also not needed here; the performance impact is
close to zero. Also copy the buffer before changing its metadata.
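For the metadata part, a minimal sketch of the safe pattern
(retimestamp_buffer() is hypothetical):

    #include <gst/gst.h>

    /* Never modify the metadata of a buffer we might share with
     * others: gst_buffer_make_writable() copies it first if needed. */
    static GstBuffer *
    retimestamp_buffer (GstBuffer * buffer, GstClockTime pts)
    {
      buffer = gst_buffer_make_writable (buffer);
      GST_BUFFER_PTS (buffer) = pts;
      return buffer;
    }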
Get rid of some indirections and inefficiencies and just payload
things directly. This gives us more control over what memory is
allocated where and how, and makes things much simpler. In
particular, we can now allocate the payload header plus the
GstMemory to represent it in one go.
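A sketch of the direct-payloading shape (write_gdp_header() and the header
length are assumptions; the actual commit goes further and puts the header
bytes and the GstMemory structure into a single allocation):

    #include <gst/gst.h>
    #include <string.h>

    #define HEADER_LEN 62   /* illustrative header size */

    static GstBuffer *
    payload_buffer (GstBuffer * payload)
    {
      guint8 *header = g_malloc (HEADER_LEN);
      GstBuffer *out;

      memset (header, 0, HEADER_LEN);   /* write_gdp_header (header, payload); */

      /* wrap the header allocation as a GstMemory... */
      out = gst_buffer_new_wrapped (header, HEADER_LEN);
      /* ...and share the payload memories instead of copying the data */
      return gst_buffer_append (out, gst_buffer_ref (payload));
    }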
Get rid of the now-useless packetizer struct and just call the
internal functions directly. Also remove the version property,
which is now defunct, not least because we create the packetizer
with the version in the init function, before a version could
even be set.
Add a function to calculate the payload CRC across multiple memories,
so we don't have to merge buffers with multiple memories just to
calculate the CRC. Also make the CRC calculation function static,
since it's not used outside dataprotocol.c, and move the special-casing
of length = 0 -> CRC = 0 into the CRC function (from the caller).
Perhaps more importantly, since payload CRC is off by default:
don't map the buffer (and possibly merge its memories in the process)
if we are not going to calculate a CRC anyway.
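A sketch of the multi-memory CRC walk (buffer_crc() is illustrative, and
the bitwise CRC-16/CCITT update stands in for the real table-driven routine):

    #include <gst/gst.h>

    static guint16
    crc_update (guint16 crc, const guint8 * data, gsize len)
    {
      gint b;

      while (len--) {
        crc ^= (guint16) (*data++) << 8;
        for (b = 0; b < 8; b++)
          crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : (crc << 1);
      }
      return crc;
    }

    static guint16
    buffer_crc (GstBuffer * buffer)
    {
      guint16 crc = 0;
      guint i, n = gst_buffer_n_memory (buffer);

      /* length = 0 -> CRC = 0, handled here instead of in the caller */
      if (gst_buffer_get_size (buffer) == 0)
        return 0;

      /* map each memory individually: no merging, no extra copies */
      for (i = 0; i < n; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buffer, i);
        GstMapInfo map;

        if (gst_memory_map (mem, &map, GST_MAP_READ)) {
          crc = crc_update (crc, map.data, map.size);
          gst_memory_unmap (mem, &map);
        }
      }
      return crc;
    }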
If typefind fails, check whether the buffer is too short for typefind. If so,
prepend the decrypted buffer to the pending buffer and try again the next time around.
https://bugzilla.gnome.org/show_bug.cgi?id=740458
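A sketch of that retry path, following the wording above (the names and the
helper usage are illustrative, not hlsdemux's exact code):

    #include <gst/gst.h>
    #include <gst/base/gsttypefindhelper.h>

    static GstCaps *
    try_typefind (GstObject * obj, GstBuffer * decrypted, GstBuffer ** pending)
    {
      GstTypeFindProbability prob;
      GstCaps *caps;

      /* prepend the decrypted buffer to whatever we already queued */
      if (*pending != NULL)
        decrypted = gst_buffer_append (decrypted, *pending);
      *pending = NULL;

      caps = gst_type_find_helper_for_buffer (obj, decrypted, &prob);
      if (caps == NULL) {
        /* possibly just too little data: keep it, retry next buffer */
        *pending = decrypted;
        return NULL;
      }

      gst_buffer_unref (decrypted);   /* real code would push the data */
      return caps;
    }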
The segment start only needs to be updated when starting the streams
or after a seek; doing it during bitrate changes makes the running
time discontinuous (it jumps back to a previous timestamp) and QoS
then drops buffers.
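A sketch of the intended condition (field names are illustrative):

    #include <gst/gst.h>

    typedef struct
    {
      GstSegment segment;
      gboolean need_segment;   /* set at stream start and after seeks only */
    } Stream;

    static void
    maybe_update_segment_start (Stream * stream, GstClockTime first_pts)
    {
      /* bitrate switches keep the old segment.start, so the running
       * time stays monotonic and QoS doesn't start dropping buffers */
      if (!stream->need_segment)
        return;

      stream->segment.start = first_pts;
      stream->need_segment = FALSE;
    }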
This can happen if this is a live pipeline and no source produced any buffer
and sent no caps until an output buffer should have been produced according
to the latency.
This simplifies the code and also makes sure that we don't forget to check all
conditions for waiting.
Also fix a potential deadlock caused by not checking if we're actually still
running before starting to wait.
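The waiting pattern this boils down to, as a hedged sketch (names are
illustrative):

    #include <glib.h>

    typedef struct
    {
      GMutex lock;
      GCond cond;
      gboolean running;
      gboolean have_data;
    } State;

    /* Re-check *all* conditions, including whether we are still running,
     * every time around the loop; never start waiting once stopped. */
    static gboolean
    wait_for_data (State * s)
    {
      gboolean ok;

      g_mutex_lock (&s->lock);
      while (s->running && !s->have_data)
        g_cond_wait (&s->cond, &s->lock);
      ok = s->running;   /* FALSE: we were told to stop, not fed data */
      g_mutex_unlock (&s->lock);
      return ok;
    }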
... and hope that everything will be fine. This shouldn't really happen, but
it previously did during shutdown. It should be fixed in videoencoder now,
but better to be on the safe side here.
Actually we should always recalculate the buffer size, since for many
sub-sampled formats our buffer size is smaller even when not padded. This
is because we don't add padding between the planes.
https://bugzilla.gnome.org/show_bug.cgi?id=740900
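A sketch of deriving the size properly (unpadded_size_for_caps() is
illustrative):

    #include <gst/video/video.h>

    /* For sub-sampled formats the chroma planes are smaller, so with no
     * padding between planes the total is well below what a naive
     * width * height * bytes-per-pixel estimate gives; e.g. I420 is
     * w * h * 3 / 2. */
    static gsize
    unpadded_size_for_caps (GstCaps * caps)
    {
      GstVideoInfo info;

      if (!gst_video_info_from_caps (&info, caps))
        return 0;

      return GST_VIDEO_INFO_SIZE (&info);
    }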
The problem was that if a buffer was mapped READWRITE (the state of buffers
coming from libav right now), mapping it READ/GL would not trigger an
upload. This is because the flag is only set when the buffer is unmapped.
We can fix this by setting the flags in map instead; as a result, an
already-mapped buffer that then gets mapped for reading in GL will be
uploaded. The remaining problem is that if the write mapper makes
modifications afterwards, those modifications will never be uploaded.
https://bugzilla.gnome.org/show_bug.cgi?id=740900
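A sketch of the fix described above; the flag here is a hypothetical
stand-in for the GL memory's real "needs upload" flag:

    #include <gst/gst.h>

    /* hypothetical; the real flag lives in the GL memory implementation */
    #define MEM_FLAG_NEED_UPLOAD (GST_MINI_OBJECT_FLAG_LAST << 0)

    /* Set the transfer flag when a write map is *created*, not only on
     * unmap, so a READ/GL map taken while the buffer is still mapped
     * READWRITE sees the flag and triggers the upload. (As noted above,
     * writes made after this map are still missed.) */
    static void
    mark_transfer_on_map (GstMemory * mem, GstMapFlags flags)
    {
      if (flags & GST_MAP_WRITE)
        GST_MINI_OBJECT_FLAG_SET (mem, MEM_FLAG_NEED_UPLOAD);
    }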