We had a problem with framerate negotiation: GStreamer was querying the
FRAMEINTERVALS based on the maximum frame size instead of the desired
frame size. This resulted in not-negotiated errors when trying to run
with a smaller frame size and an fps higher than the maximum allowed
for the maximum image size.
For example, the maximum framerate for 1024x1024 RGB on the CMOSIS4000
is 28.292, while for 1024x100 RGB it is 280.867, but GStreamer would
apply the 28.292 limit no matter the frame size used.
I fixed this by first changing the CAPS query to use the minimum frame
size instead of the maximum. This however has the downside of allowing
GStreamer to negotiate framerates that are too high if the image size
is bigger than the minimum. This is not a huge problem, since our
driver then simply clamps the fps value to the maximum. However,
GStreamer was not being properly notified of this change and would
therefore report a wrong fps in the CAPS structure. Note that the fps
was correct inside the buffer info, since GStreamer was reading the fps
back after setting it; it was just not being propagated to the CAPS
structure. I have also added a WARNING at this point so we can see
whether the fps that GStreamer tries to apply was accepted or not.
The next part of the fix was to add a framerate check after the frame
size has been established. I did this inside the fixate_caps function
of v4l2src, which was already calling TRY_FMT to check whether the
format was correct, so I added a check for ENUM_FRAMEINTERVALS there.
Now we get the not-negotiated error again if the fps is too high for
the selected frame size. Also added a couple of warnings so it is easy
to see that this was the cause.
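Roughly the kind of check involved (illustrative sketch only, not the
actual patch; the helper name is made up):

```
/* Sketch: check whether fps_n/fps_d is achievable for the given pixel
 * format and frame size by enumerating the frame intervals the driver
 * reports for exactly that size. */
#include <glib.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static gboolean
framerate_supported (int fd, guint32 pixfmt, guint width, guint height,
    guint fps_n, guint fps_d)
{
  struct v4l2_frmivalenum fival = { 0, };

  fival.pixel_format = pixfmt;
  fival.width = width;
  fival.height = height;

  for (fival.index = 0; ioctl (fd, VIDIOC_ENUM_FRAMEINTERVALS, &fival) == 0;
      fival.index++) {
    if (fival.type != V4L2_FRMIVAL_TYPE_DISCRETE)
      return TRUE;              /* stepwise/continuous: let the driver clamp */
    /* the reported interval is the inverse of the framerate */
    if (fival.discrete.numerator * fps_n == fival.discrete.denominator * fps_d)
      return TRUE;
  }
  return FALSE;
}
```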
See:
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3037
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7850>
Instead of using the libc ceil() and pow() machinery for double types,
use simple integer math for ceiling division and a left bit shift for
the integer power of two, since the library only uses them on unsigned
integers.
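For illustration, a minimal sketch of the replacement pattern (macro
names are made up):

```
/* Ceiling division and power of two on unsigned integers, without
 * going through double-precision ceil()/pow(). */
#define UINT_CEIL_DIV(a, b)  (((a) + (b) - 1) / (b))
#define UINT_POW2(n)         (1u << (n))
```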
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7869>
But also don't wait for a buffer on both pads, which might take forever in case
of gaps in one of the streams.
The muxer can only advance the time if it has a timestamped buffer that can be
output, otherwise it will just busy-wait and use up a lot of CPU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7871>
This is a custom mapping. Not much more than that is needed to store
VP9 in MPEG-TS, since the bitstream is self-contained.
Since there is no official specification, we don't want people to be
mistaken into believing there is one. Therefore the mapping is only
used in the muxer if the (new) property `enable-custom-mappings` is set
to TRUE.
* The MPEG-TS stream type is Private Data (0x06) with the registration
  descriptor set to `VP09`.
* The access units are VP9 frames stored in PES packets.
* As there is no emulation prevention byte in a VP9 elementary stream,
  the PES start code can be misdetected. To avoid this, the start of a
  PES packet must be signalled using the Payload Unit Start Indicator
  in the transport packet header.
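For reference, a minimal usage sketch in application code (assuming the
property lands on the mpegtsmux element as described above):

```
/* Opt in to the custom VP9-in-MPEG-TS mapping; without this the muxer
 * will not use the unofficial mapping described above. */
GstElement *mux = gst_element_factory_make ("mpegtsmux", NULL);
g_object_set (mux, "enable-custom-mappings", TRUE, NULL);
```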
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7707>
When encoding an image to mpeg2 video, with something like:
gst-launch-1.0 encodebin name=e profile=mpegpsmux:video/mpeg,mpegversion=2,systemstream=false ! \
filesink location=sample.mpg filesrc num-buffers=1 blocksize=$(stat -c%s sample.png) \
location=sample/dts.png ! pngdec ! e.
The type of the only frame is set to an invalid value of 0.
The consequence is that mpegvideoparse sets the delta unit flag on the
buffer because it is not an I frame, then decodebin3 drops this only
frame because the delta unit flag is set, and the decoder receives EOS
before it was able to receive any encoded data.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7832>
Some special videos with the mlv fourcc can't be recognized by
qtdemux when the subtype of the video is vide instead of
m1v, which causes a negotiation error in the subsequent plugin.
So handle this case in qtdemux_video_caps. It might be better
than nothing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7855>
Variable f1 is never used, so just skip that loop for now.
It seems the test has never actually tested resampling because of
that bug, and the test fails if fixed to actually resample.
For now we just avoid the pointless 126*12 pipelines that were
testing the same thing (nothing) over and over again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7827>
A tensor can be row- or column-major, but for tensors with more than
two dimensions it may also be necessary to describe the order in which
the dimensions need to be read. The reserved field in GstTensorDim is
there for this purpose. If we need this we can add
GST_TENSOR_DIM_ORDER_INDEXED and have each dimension carry an index
defining its order.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
GstTensor contained two fields (data, dims) that were dynamically
allocated. data is a GstBuffer, and we have pools for efficient memory
management. dims is a small array storing the dimensions of the tensor.
The dims field can be allocated in place by moving it to the end of the
structure. This allows better memory management when GstTensor is
stored in an analytics meta, which will take advantage of the _clear
interface for re-use.
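A rough sketch of the in-place layout (type and field names are
illustrative, not the actual GstTensor definition):

```
/* A single allocation holds the struct and its dims array, which lives
 * at the end of the structure instead of behind a separate pointer. */
typedef struct {
  GstBuffer *data;              /* tensor payload, managed via a buffer pool */
  gsize num_dims;
  gsize dims[];                 /* allocated in place, right after the header */
} ExampleTensor;

static ExampleTensor *
example_tensor_alloc (gsize num_dims)
{
  ExampleTensor *t = g_malloc0 (sizeof (ExampleTensor) + num_dims * sizeof (gsize));
  t->num_dims = num_dims;
  return t;
}
```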
- New API to allocate and free GstTensor
To continue supporting use cases where GstTensor is not stored in an
analytics meta, we provide gst_tensor_alloc, gst_tensor_alloc_n and
gst_tensor_free to facilitate memory management.
- Make GstTensor a boxed type
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
Adding an option to keep them no matter what.
Log files are often pretty large and keeping them around can be
annoying; usually people won't look at log files for passing tests, and
we do not even print them out.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7700>
Only in LTC mode do we introduce additional latency, and it depends
only on a property and not on the framerate, so waiting for the
framerate is not necessary.
In all other modes no latency is introduced at all and the latency
query can simply be proxied.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7831>
There is no point in having an endian marker on 8 bit bayer format
names since it is just one byte. Thus remove it.
This also fixes an incompatibility with gst-plugins-bad, which likewise
uses no endian marker on 8 bit bayer format names.
Fixes: #3729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7826>
The `reuse` property ends up setting the SO_REUSEADDR socket option on
the UDP socket. This setting can have surprising effects.
On Linux systems the man page (`socket(7)`) states:
```
SO_REUSEADDR
Indicates that the rules used in validating addresses supplied
in a bind(2) call should allow reuse of local addresses. For
AF_INET sockets this means that a socket may bind, except when
there is an active listening socket bound to the address.
```
But since UDP does not listen, this ends up meaning that when an
ephemeral port is allocated (by setting `port` to `0`) the kernel is
free to reuse any other UDP port that has `SO_REUSEADDR` set.
Tests checking the likelihood of port conflicts when using multiple
`udpsrc` elements show conflicts starting to occur after ~100-300
udpsrc instances with port allocation enabled. See issue #3411 for more
details.
Changing the default value of a property is not a small thing; we risk
breaking applications that rely on the current default value. But since
having `reuse` default to TRUE can also have damaging and hard-to-debug
consequences, it might be worth considering.
Having `SO_REUSEADDR` enabled for multicast might have some use cases,
but for unicast with dynamic port allocation it does not make sense.
When not using a multicast address we now disable port reuse if the
`port` property is set to 0 (= allocate) and warn the user that we did
so.
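For applications that want to force this behaviour themselves, a
minimal sketch using the existing udpsrc properties:

```
/* Request an ephemeral port and explicitly disable SO_REUSEADDR so the
 * kernel cannot hand the same port to another reuse-enabled UDP socket. */
GstElement *src = gst_element_factory_make ("udpsrc", NULL);
g_object_set (src, "port", 0, "reuse", FALSE, NULL);
```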
Closes #3411
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7841>
Combine the appsrc and appsink settings into one place and ensure that
the appsrc will output a TIME segment, to avoid incorrect segment format
criticals in some situations.
The D3D11 path was already setting the segment format correctly.
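A minimal sketch of the relevant appsrc setting (illustrative, not the
full change):

```
/* Make appsrc produce a TIME segment so downstream does not emit
 * segment format criticals. */
GstElement *appsrc = gst_element_factory_make ("appsrc", NULL);
g_object_set (appsrc, "format", GST_FORMAT_TIME, NULL);
```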
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7839>
The gst_dep.get_variable('libexecdir') may fail in some scenarios
(e.g. building a module alone inside an uninstalled devenv) and
it shouldn't really be reached in the first place if docs are
disabled via options.
This also avoids confusing meson messages when cross-compiling or
doing a static build.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7818>
This commit adds a Find Module implementing the necessary logic to link
against GStreamer, while implementing some extra bits to enhance the
compatibility.
The first addition is the `mobile` target, which implements the
monolithic `gstreamer_android` library, and which here gains
compatibility with Apple's operating systems.
The second addition is the handling of the basic GStreamer libraries as
`--whole-archive` when statically linked, which was ported from the
ndk-build project in Cerbero. This is not easy to do, as CMake suffers
from several issues that impede its proper usage of pkg-config:
- It cannot differentiate system/compiler-specific libraries,
e.g. `-lm` and `-ldl`, but especially `-framework Cocoa`.
- It does not support `--whole-archive` natively before 3.27.
- It attempts to reorder flags blindly by separating them with spaces,
thus requiring the use of `-Wl,` wrapping or (in the case of Apple
frameworks) manual framework lookup
The third addition is the port of the Fontconfig and ca-certificates
bundling logic.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6881>
The calculated position was off. I'm not sure of the exact cause;
possibly because we're in AU-aligned byte-stream mode, which means
`transform` is true.
Replacing the math that calculates the NALU positions with code more
similar to what is already in use for `idr_pos` seems to have fixed it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7318>
We do not own any ref to queries when running them.
If we end up processing the query from the streaming thread, it means
that it was a serialized query and that the sinkpad streaming thread,
which owns the reference, is waiting for the query to be processed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7767>
Add missing pixel format constants, and mappings to GStreamer buffer
formats for P010 and the packed variants of 420 and RGBA layouts.
This fixes problems where Android decoders that announce support for
these common pixel formats would refuse to output raw video frames
and only allow the 'hardware surfaces output' path.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7743>
Previously the wrapping of the 24-bit reference time was not handled
correctly when transforming it into GstClockTime. Given the unit of
64ms, the span that can be represented by 24 bits is about 12 days, and
depending on the start value we could get a wrapping problem at any
time within this time frame. This turned out to be particularly
problematic for the GCC algorithm in gst-plugins-rs, which tried to
evict old packets based on the "oldest" timestamp, which due to
wrapping problems could be in the future. Thus, the container managing
the packets could grow without limit for a long time, creating both CPU
and memory problems.
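For illustration only (not the actual gst-plugins-rs fix, and in C
rather than Rust), one common way to extend a wrapping 24-bit counter
is to pick the candidate congruent to the raw value that lies closest
to the previously seen extended value:

```
#include <glib.h>

/* Extend a 24-bit reference time (1 tick = 64 ms) that wraps roughly
 * every 12 days. 'raw' is assumed to be < 2^24; 'last' is the
 * previously returned extended value. */
#define WRAP ((guint64) 1 << 24)

static guint64
extend_24bit (guint32 raw, guint64 last)
{
  guint64 cand = (last & ~(WRAP - 1)) + raw;

  /* choose the candidate closest to 'last', handling wrap in both
   * directions (late/reordered values as well as forward wraps) */
  if (cand > last + WRAP / 2 && cand >= WRAP)
    cand -= WRAP;
  else if (cand + WRAP / 2 < last)
    cand += WRAP;

  return cand;
}
```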
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7527>
Don't assume that video/x-raw caps mean buffers are mappable
or can be processed by videoconvert and friends. Only plug
those converters for real system memory, and treat other
memory capsfeatures as hardware surfaces.
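A minimal sketch of the kind of capsfeatures check involved
(illustrative):

```
/* Only treat the caps as CPU-mappable when the caps features are plain
 * system memory; anything else is assumed to be a hardware surface. */
GstCapsFeatures *features = gst_caps_get_features (caps, 0);
gboolean is_sysmem = gst_caps_features_contains (features,
    GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY);
```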
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7741>
Some file format standards don't require mastering-display-info
and content-light-level values to be provided.
Decklink however requires the static HDR metadata for the PQ transfer
function, which we may not have.
CTA-861-G mentions that in this case 0 may be provided as an 'unknown'
value, which is what we use here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742>