CLAMP checks both whether value is '< 0' and whether it is '> max'. Value will
never be negative since it is an unsigned integer. Removing that check and only
checking whether it is bigger than max, setting it appropriately.
Also converting the previous instance of this into MIN() for consistency.
CID 1139793
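Roughly, as a sketch (the function and variable names are placeholders, not the
actual code), the pattern being changed looks like this:

    #include <glib.h>

    /* cap an unsigned value at 'max'; with a guint the '< 0' arm of
     * CLAMP can never be taken */
    static guint
    cap_value (guint value, guint max)
    {
      /* before: return CLAMP (value, 0, max);  -- the lower bound is dead code */
      return MIN (value, max);
    }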
Some video bitstreams report an overly restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, even though it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to be connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
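As a rough illustration only (the helper name and the H.264-style profile
strings are assumptions, not what the actual change does), relaxing the
restriction could mean advertising a list of workable profiles instead of the
single signalled one:

    #include <gst/gst.h>

    /* hypothetical helper: widen the "profile" field of (writable) caps to a
     * list of profiles that can in practice decode such a stream */
    static void
    relax_profile (GstCaps * caps)
    {
      GValue profiles = G_VALUE_INIT;
      GValue p = G_VALUE_INIT;
      const gchar *names[] = { "constrained-baseline", "baseline", "main", "high" };
      guint i;

      g_value_init (&profiles, GST_TYPE_LIST);
      g_value_init (&p, G_TYPE_STRING);
      for (i = 0; i < G_N_ELEMENTS (names); i++) {
        g_value_set_string (&p, names[i]);
        gst_value_list_append_value (&profiles, &p);
      }
      gst_caps_set_value (caps, "profile", &profiles);
      g_value_unset (&p);
      g_value_unset (&profiles);
    }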
CLAMP checks both whether y is '< 0' and whether it is '> h1'. y will never be
negative since it is an unsigned integer. Removing that check and only checking
whether it is bigger than h1, setting it to that max appropriately.
CID 1139792
type is truncated to 0-31 with "& 0x1f", but right after that it is checked
whether the value equals GST_H265_NAL_VPS, GST_H265_NAL_SPS or
GST_H265_NAL_PPS (which are 32, 33 and 34 respectively). Obviously, this can
never be true if the value is at most 31 after the truncation.
The intention of the code was to truncate to 0-63.
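As a sketch (the data pointer and surrounding code are assumed), the H.265 NAL
unit header keeps nal_unit_type in the six bits after the forbidden_zero_bit,
so the extraction should look roughly like:

    /* nal_unit_type is bits 1-6 of the first NAL unit header byte,
     * so the mask must be 0x3f; '& 0x1f' capped the result at 31 */
    guint8 type = (data[0] >> 1) & 0x3f;

    if (type == GST_H265_NAL_VPS || type == GST_H265_NAL_SPS ||
        type == GST_H265_NAL_PPS) {
      /* 32, 33 and 34 can match again */
    }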
After further investigation, the previous commit is wrong. The code intended to
check whether the type is 39 or within the ranges 41-44 and 48-55, just like
gsth265parse.c does. Type 40 would not be complete.
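In other words, roughly (variable name assumed):

    if (nal_type == 39 || (nal_type >= 41 && nal_type <= 44) ||
        (nal_type >= 48 && nal_type <= 55)) {
      /* handle these NAL unit types */
    }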
nal_type is an index into the GstH265NalUnitType enum. There are two kinds of
dead code here:
First, after checking whether nal_type is >= 39 there are two OR conditions
that check whether the value is in ranges higher than that number. If
nal_type >= 39 is true, those other conditions aren't evaluated; if it is false
and they are evaluated, they will always be false as well. They are redundant.
Second, the enum has a range of 0 to 40, so checks for ranges starting at 41 or
higher can never be true.
Removing these redundant checks.
CID 1249684
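Sketched out (names assumed), the change boils down to:

    /* before: with nal_type capped at 40, everything after the first
     * comparison is either unreachable or always false */
    if (nal_type >= 39 ||
        (nal_type >= 41 && nal_type <= 44) ||
        (nal_type >= 48 && nal_type <= 55)) {
      /* handle these NAL unit types */
    }

    /* after */
    if (nal_type >= 39) {
      /* handle these NAL unit types */
    }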
Every time a buffer is provided by baseparse, we parse all the data from the
beginning. But since we have already parsed some of the data in previous
iterations, it doesn't make much sense to keep parsing the same data every time.
Hence, skipping the data already read in previous iterations to improve the
parsing performance.
https://bugzilla.gnome.org/show_bug.cgi?id=740058
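A rough sketch of the idea (the parsed_to field, the scan helper and the
data/size variables are hypothetical, not the actual code):

    /* 'priv->parsed_to' is a hypothetical offset persisted between calls,
     * starting at 0, so each call resumes where the previous one stopped */
    gsize offset = priv->parsed_to;

    while (offset + 4 <= size) {
      if (is_sync_point (data + offset))   /* hypothetical scan helper */
        break;
      offset++;
    }
    priv->parsed_to = offset;              /* next iteration starts here */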
The image capture mutex and the pad object lock would cause a race
if the pad query was made right when the image probe was running.
The image probe needs the capture mutex, while the query would need
the pad object lock.
It might race with the image probe thread, as that thread uses the capture
mutex just like the start-capture handler from camerabin does. start-capture
would be waiting for the source's streaming thread to stop so that it can set
the source state to READY, while the probe would be blocked waiting to acquire
the capture mutex. This causes a deadlock.
Don't rely on core implementation details, which are private and
may change. It's also not needed here; the performance impact is
close to none. Also copy the buffer before changing its metadata.
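For the metadata part, roughly (the timestamp is assumed context):

    /* take a copy so the input buffer's metadata is left untouched */
    GstBuffer *copy = gst_buffer_copy (buffer);

    GST_BUFFER_PTS (copy) = timestamp;   /* safe to change on the copy */
    /* push 'copy' downstream, handle 'buffer' as before */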
Get rid of some indirections and inefficiencies and just
payload things directly, which gives us more control over
what memory is allocated where and how, and makes things
much simpler. In particular,
we can now allocate the payload header plus the
GstMemory to represent it in one go.
Get rid of the now-useless packetizer struct and just
call the internal functions directly. Also remove the
version property, which is now defunct, not least
because we create the packetizer with the
version in the init function before a version
can be set.
Add a function to calculate a payload CRC across multiple memories
so we don't have to merge buffers with multiple memories just to
calculate the CRC. Also make the CRC calculation function static,
since it's not used outside dataprotocol.h, and move the special-casing
of length = 0 -> CRC = 0 into the CRC function (from the caller).
Perhaps more importantly, since the payload CRC is off by default:
don't map the buffer (and possibly merge memories in the process)
if we are not going to use it to calculate a CRC anyway.
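A rough sketch of such a multi-memory CRC walk (crc_update() stands in for the
actual CRC routine, which is assumed):

    /* hypothetical incremental CRC routine */
    static guint16 crc_update (guint16 crc, const guint8 * data, gsize size);

    static guint16
    calculate_payload_crc (GstBuffer * buffer)
    {
      guint16 crc = 0;
      guint i, n;

      /* special case moved here from the caller: no payload -> CRC of 0 */
      if (gst_buffer_get_size (buffer) == 0)
        return 0;

      /* map and feed each memory separately, no merging needed */
      n = gst_buffer_n_memory (buffer);
      for (i = 0; i < n; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buffer, i);
        GstMapInfo map;

        if (gst_memory_map (mem, &map, GST_MAP_READ)) {
          crc = crc_update (crc, map.data, map.size);
          gst_memory_unmap (mem, &map);
        }
      }
      return crc;
    }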
This can happen if this is a live pipeline and no source produced any buffer
or sent any caps before an output buffer should have been produced according
to the latency.
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, whether we have enough input or not.