We had a problem with framerate negotiation.
GStreamer was querying the frame intervals (ENUM_FRAMEINTERVALS) based on
the maximum frame size instead of the desired frame size.
This resulted in not-negotiated errors when trying to run with a smaller
frame size and an fps higher than the maximum for the maximum image size.
For example, the maximum framerate for 1024x1024 RGB on the CMOSIS4000 is
28.292 fps, while for 1024x100 RGB it is 280.867 fps.
But GStreamer would not allow any framerate higher than 28.292, no matter
which frame size was used.
I fixed this by first changing the CAPS query to use the minimum frame
size instead of the maximum.
This has the downside of allowing GStreamer to negotiate framerates that
are too high when the image size is bigger than the minimum.
That is not a huge problem, since our driver then simply clamps the fps
value to the maximum.
However, GStreamer was not being properly notified of this clamping and
would therefore report a wrong fps in the CAPS structure.
Note that the fps was correct inside the buffer info, since GStreamer
reads the fps back after setting it; it was just not being propagated to
the CAPS structure.
I also added a warning at this point so we can see whether the fps that
GStreamer tries to apply was accepted or not.
The next part of the fix adds a framerate check after the frame size has
been established. I did this inside the fixate_caps function of v4l2src,
which was already calling TRY_FMT to check that the format is correct,
so I just added an ENUM_FRAMEINTERVALS check there.
Now we get the not-negotiated error again if the fps is too high for the
selected frame size.
A couple of warnings were also added so it is easy to see that this was
the cause.
See:
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3037
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7850>
Instead of the libc ceil() and pow() machinery for doubles, use plain
integer math, since the library only uses these for unsigned integers:
a ceiling division and a left bit shift for integer powers of two.
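A minimal sketch of the two integer replacements (helper names are
illustrative, not the ones used in the library):
```
/* Ceiling division without going through double math;
 * assumes den != 0 and num + den - 1 does not overflow. */
static inline unsigned int
ceil_div (unsigned int num, unsigned int den)
{
  return (num + den - 1) / den;
}

/* Integer power of two via a left shift; assumes n < 32. */
static inline unsigned int
pow2 (unsigned int n)
{
  return 1u << n;
}
```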
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7869>
But also don't wait for a buffer on both pads, which might take forever in case
of gaps in one of the streams.
The muxer can only advance the time if it has a timestamped buffer that
can be output; otherwise it will just busy-wait and use up a lot of CPU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7871>
This is a custom mapping. Not much more than that is needed to store VP9
in MPEG-TS, since the bitstream is self-contained.
Since there is no official specification, we don't want people to be
misled into believing there is one. Therefore the mapping is only used in
the muxer if the (new) property `enable-custom-mappings` is set to TRUE.
* The MPEG-TS stream type is Private Data (0x06) with the registration
descriptor set to `VP09`.
* The access units are VP9 frames stored in PES packets.
* As there is no emulation prevention byte in the VP9 elementary stream,
PES start codes can be misdetected. To avoid this, the start of a PES
packet must be signalled using the Payload Unit Start Indicator in the
transport packet header.
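For illustration, the registration descriptor from the first bullet
boils down to six bytes in the stream's PMT entry (a sketch, not the
muxer code):
```
/* Registration descriptor advertising VP9, attached to a PMT entry
 * whose stream_type is 0x06 (PES packets containing private data). */
static const unsigned char vp9_registration_descriptor[] = {
  0x05,                 /* descriptor_tag: registration_descriptor */
  0x04,                 /* descriptor_length */
  'V', 'P', '0', '9'    /* format_identifier */
};
```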
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7707>
When encoding an image to mpeg2 video, with something like:
gst-launch-1.0 encodebin name=e profile=mpegpsmux:video/mpeg,mpegversion=2,systemstream=false ! \
filesink location=sample.mpg filesrc num-buffers=1 blocksize=$(stat -c%s sample.png) \
location=sample/dts.png ! pngdec ! e.
The type of the only frame is set to an invalid value, 0.
As a consequence, mpegvideoparse sets the delta-unit flag on the buffer
because it is not an I-frame, then decodebin3 drops this only frame
because the delta-unit flag is set, and the decoder receives EOS before
it was able to receive any encoded data.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7832>
Some videos with the mlv fourcc can't be recognized by qtdemux when the
subtype of the video is vide instead of m1v, which causes negotiation
errors in downstream plugins.
So handle this case in qtdemux_video_caps; it is better than nothing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7855>
The variable f1 is never used, so just skip that loop for now.
It seems the test has never actually tested resampling because of that
bug, and it fails if fixed to actually resample.
For now we just avoid the pointless 126*12 pipelines that were testing
the same thing (nothing) over and over again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7827>
A tensor can be row-major or column-major, but for tensors with more
than two dimensions it may also be necessary to describe the order in
which the dimensions have to be read. The reserved field in GstTensorDim
is there for this purpose. If we ever need this we can add
GST_TENSOR_DIM_ORDER_INDEXED and use an index defining the order of each
dimension.
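As a reminder of what the two existing orders mean, for a
two-dimensional tensor the flat index of element (r, c) differs as
follows (illustrative helpers only):
```
#include <stddef.h>

/* Row-major: elements of a row are contiguous in memory. */
static size_t
row_major_index (size_t r, size_t c, size_t cols)
{
  return r * cols + c;
}

/* Column-major: elements of a column are contiguous in memory. */
static size_t
col_major_index (size_t r, size_t c, size_t rows)
{
  return r + c * rows;
}
```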
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
GstTensor contained two fields (data, dims) that were dynamically
allocated. data is a GstBuffer, for which we have buffer pools for
efficient memory management; dims is a small array storing the
dimensions of the tensor. The dims field can be allocated in place by
moving it to the end of the structure. This allows better memory
management when GstTensor is stored in an analytics meta, which can take
advantage of the _clear interface for re-use.
- New API to allocate and free GstTensor
To continue to support use cases where GstTensor is not stored in an
analytics meta, we provide gst_tensor_alloc, gst_tensor_alloc_n and
gst_tensor_free to facilitate memory management.
- Make GstTensor a boxed type
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
Add an option to keep them no matter what.
Log files are often pretty large and keeping them around can be
annoying; usually people won't look at log files for passing tests, and
we do not even print them out.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7700>
Only in LTC mode do we introduce additional latency, and it depends only
on a property and not on the framerate, so waiting for the framerate is
not necessary.
In all other modes no latency is introduced at all and the latency query
can simply be proxied.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7831>
There is no point in having an endianness marker on 8-bit bayer format
names since each sample is just one byte, so remove it.
This also fixes an incompatibility with gst-plugins-bad, where 8-bit
bayer format names likewise carry no endianness marker.
Fixes: #3729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7826>
The `reuse` property ends up setting the SO_REUSEADDR socket option on
the UDP socket. This setting has surprising effects.
On Linux systems the man page (`socket(7)`) states:
```
SO_REUSEADDR
Indicates that the rules used in validating addresses supplied
in a bind(2) call should allow reuse of local addresses. For
AF_INET sockets this means that a socket may bind, except when
there is an active listening socket bound to the address.
```
But since UDP does not listen, this ends up meaning that when an
ephemeral port is allocated (by setting `port` to `0`), the kernel is
free to reuse any other UDP port that has `SO_REUSEADDR` set.
Tests checking the likelihood of port conflicts when using multiple
`udpsrc` elements show conflicts starting to occur after ~100-300 udpsrc
instances with port allocation enabled. See issue #3411 for more details.
Changing the default value of a property is not a small thing; we risk
breaking applications that rely on the current default. But since having
`reuse` default to TRUE can also have damaging and hard-to-debug
consequences, it might be worth considering.
Having `SO_REUSEADDR` enabled for multicast might have some use cases,
but for unicast with dynamic port allocation it does not make sense.
When not using a multicast address we will now disable port reuse if the
`port` property is set to 0 (= allocate) and warn the user that we did
so.
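A minimal sketch of the underlying problem, independent of GStreamer:
with `SO_REUSEADDR` set, two sockets asking for an ephemeral port (port
0) may end up sharing the same port.
```
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Create a UDP socket bound to an ephemeral port, optionally with
 * SO_REUSEADDR; with reuse enabled the kernel may hand out a port that
 * another SO_REUSEADDR socket is already bound to. Error handling
 * omitted. */
static int
bound_udp_socket (int reuse)
{
  int fd = socket (AF_INET, SOCK_DGRAM, 0);
  struct sockaddr_in addr;

  setsockopt (fd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof (reuse));

  memset (&addr, 0, sizeof (addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl (INADDR_ANY);
  addr.sin_port = htons (0);    /* 0: let the kernel pick the port */
  bind (fd, (struct sockaddr *) &addr, sizeof (addr));

  return fd;
}
```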
Closes #3411
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7841>
Combine the appsrc and appsink settings into one place and ensure that
the appsrc will output a TIME segment, to avoid incorrect segment format
criticals in some situations.
The D3D11 path was already setting the segment format correctly.
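The key part of the shared setup is making the appsrc produce a TIME
segment; roughly (a sketch, assuming `appsrc` is the element used by
this path):
```
#include <gst/gst.h>

/* Configure the appsrc to emit a TIME segment instead of the default
 * BYTES one, matching what the D3D11 path already configured. */
static void
configure_appsrc (GstElement * appsrc)
{
  g_object_set (appsrc, "format", GST_FORMAT_TIME, NULL);
}
```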
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7839>
The gst_dep.get_variable('libexecdir') may fail in some scenarios
(e.g. building a module alone inside an uninstalled devenv) and
it shouldn't really be reached in the first place if docs are
disabled via options.
This also avoids confusing Meson messages when cross-compiling or doing
a static build.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7818>
This commit adds a Find Module implementing the necessary logic to link
against GStreamer, while implementing some extra bits to enhance the
compatibility.
The first addition is the `mobile` target, which implements the
monolithic `gstreamer_android` library, and which here gains
compatibility with Apple's operating systems.
The second addition is the handling of the basic GStreamer libraries as
`--whole-archive` when statically linked, which was ported from the
ndk-build project in Cerbero. This is not easy to do, as CMake suffers
from several issues that impede its proper usage of pkg-config:
- It cannot differentiate between system/compiler specific libraries
e.g. `-lm`, `-ldl`, but especially `-framework Cocoa`.
- It does not support `--whole-archive` natively until 3.27
- It attempts to reorder flags blindly by separating them with spaces,
thus requiring the use of `-Wl,` wrapping or (in the case of Apple
frameworks) manual framework lookup
The third addition is the port of the Fontconfig and ca-certificates
bundling logic.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6881>
The calculated position was off. I'm not sure of the exact cause;
possibly because we're in AU-aligned byte-stream mode, which means
`transform` is true.
Replacing the math that calculates the NALU positions with code more
similar to what is already in use for `idr_pos` seems to have fixed it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7318>
We do not own any ref to queries when running them.
If we end up processing the query from the streaming thread, it means
that it was a serialized query which is being waited on by the sinkpad
streaming thread, and that thread owns the reference.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7767>
Add missing pixel format constants, plus mappings of P010 and packed
variants of 4:2:0 and RGBA layouts to GStreamer buffer formats. This
fixes problems with Android decoders that announce support for these
common pixel formats refusing to output raw video frames and only
allowing the 'hardware surfaces' output path.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7743>
Previously, wrapping of the 24-bit reference time was not handled
correctly when transforming it into GstClockTime. Given the 64 ms unit,
the span that can be represented in 24 bits is about 12 days, and
depending on the start value a wraparound can happen anytime within that
time frame. This turned out to be particularly problematic for the GCC
algorithm in gst-plugins-rs, which tried to evict old packets based on
the "oldest" timestamp, which due to the wrapping problem could lie in
the future. Thus, the container managing the packets could grow without
limit for a long time, creating both CPU and memory problems.
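A sketch of the kind of unwrapping that is needed (illustrative helper,
not the actual gst-plugins-rs code): the 24-bit value is extended to 64
bits by interpreting each new sample as the smallest forward or backward
step from the previous one.
```
#include <stdint.h>

#define REF_TIME_BITS 24
#define REF_TIME_SPAN (1u << REF_TIME_BITS)

/* Extend a wrapping 24-bit reference time (in 64 ms units) into a
 * 64-bit counter that no longer wraps. */
static uint64_t
extend_ref_time (uint64_t last_extended, uint32_t new_24bit)
{
  uint32_t last_24bit = (uint32_t) (last_extended & (REF_TIME_SPAN - 1));
  uint32_t diff = (new_24bit - last_24bit) & (REF_TIME_SPAN - 1);

  if (diff < REF_TIME_SPAN / 2)
    return last_extended + diff;                   /* forward, possibly across a wrap */
  else
    return last_extended - (REF_TIME_SPAN - diff); /* small step backwards */
}
```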
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7527>
Don't assume that video/x-raw caps mean buffers are mappable or can be
processed by videoconvert and friends. Only plug those converters for
real system memory, and treat other memory caps features as hardware
surfaces.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7741>
Some file format standards don't require mastering-display-info and
content-light-level values to be provided.
Decklink, however, requires the static HDR metadata for the PQ transfer
function, which we may not have.
CTA-861-G mentions that in this case 0 may be provided as an 'unknown'
value, which is what we use here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742>
* Consistently name parse functions according to their message type and
deprecate the misnamed ones
* Add missing parse functions
* Check for the correct message type when parsing
* Use correct field name for warning message details
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7754>
If a stream has an 'irregular' frame rate (e.g. metadata), RTCP SRs may
be generated way too early, before the RTPSource has received the first
packet after the latency was configured in the pipeline.
We skip such RTPSources in the RTCP generation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7740>
This is documented as:
> You can query how many buffers are queued by reading the
> #GstQueue:current-level-buffers property. You can track changes
> by connecting to the notify::current-level-buffers signal (which
> like all signals will be emitted from the streaming thread). The same
> applies to the #GstQueue:current-level-time and
> #GstQueue:current-level-bytes properties.
... but was not implemented.
This also respects the `silent` property, so the notify signals can be
made less intrusive.
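A short sketch of what now works as documented (assuming `queue` is a
queue element in a running pipeline):
```
#include <gst/gst.h>

static void
level_changed_cb (GObject * obj, GParamSpec * pspec, gpointer user_data)
{
  guint level;

  /* Like all GObject notifies from queue, this runs on the streaming thread. */
  g_object_get (obj, "current-level-buffers", &level, NULL);
  g_print ("queue holds %u buffers\n", level);
}

static void
watch_queue_level (GstElement * queue)
{
  g_signal_connect (queue, "notify::current-level-buffers",
      G_CALLBACK (level_changed_cb), NULL);
}
```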
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7486>
Instead of registering the whole list of formats associated with a
chroma, follow our experience with GstVA, which tells us that entry
points only handle one color format per supported chroma, and that these
are the ones reflected in the static table.
This avoids exposing unsupported color formats for negotiation.
Fixes: #3914
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7708>
There is the possibility that an element/code/helper creates an
identical `GstStream` (same type and stream-id) instance instead of
re-using a previous one.
For those cases, when detecting whether a `GstStream` is already present
in a collection, we need to do more checks than just comparing pointers.
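Roughly, the extra check amounts to something like this (a sketch; the
actual code may compare more or fewer fields):
```
#include <gst/gst.h>

/* Two GstStream instances are considered the same stream if they have
 * the same stream type and the same stream-id, even when the pointers
 * differ. */
static gboolean
streams_match (GstStream * a, GstStream * b)
{
  if (a == b)
    return TRUE;

  return gst_stream_get_stream_type (a) == gst_stream_get_stream_type (b) &&
      g_strcmp0 (gst_stream_get_stream_id (a),
          gst_stream_get_stream_id (b)) == 0;
}
```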
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7716>
If we can't get the current caps when receiving a stream-start, that's
fine; they can/will be provided by other means at a later time.
What we definitely should not do is provide the starting caps of the
chain, which are potentially completely different from the final ones
(like, for example, `application/x-rtp`).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7716>
If an encoder supports multiple codecs (a bin wrapping/auto-plugging encoders)
then its src pad template caps might list the supported codecs. Without this
patch the selected parser would be the one corresponding to the first codec,
leading to caps negotiation error later on. The proposed fix is to check the
media type on the parser candidates sink pad templates according to the
requested encoded format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7670>
The difference between the compared timestamps might fall outside the
gint range, resulting in wrong sorting results. Correct this by
comparing the timestamps and returning -1, 0 or 1 depending on the
result.
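The safe pattern is simply (sketch):
```
#include <gst/gst.h>

/* Returning (gint) (a - b) can overflow; compare explicitly instead. */
static gint
compare_timestamps (GstClockTime a, GstClockTime b)
{
  if (a < b)
    return -1;
  if (a > b)
    return 1;
  return 0;
}
```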
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7726>
The wraparound handling code assumes that the PCR is updated regularly
so that wraparounds can be detected. With ignore-pcr=true that was not
the case, and it stayed initialized at 1h forever.
To avoid this problem, update the fake PCR whenever the PTS advances by
more than 5s, and also detect wraparounds in these fake PCRs.
The problem can be reproduced with
$ gst-launch-1.0 videotestsrc pattern=black ! video/x-raw,framerate=1/5 ! \
x264enc speed-preset=ultrafast tune=zerolatency ! mpegtsmux ! \
tsdemux ignore-pcr=true ! fakesink
which restarts timestamps at 0 after around 26h30m.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7588>
Some servers (e.g. Axis cameras) expect the client to propose the encryption
key(s) to be used for SRTP / SRTCP. This is required to allow re-keying so
as to evade cryptanalysis. Note that the behaviour is not specified by the
RFCs. By setting the 'client-managed-mikey-mode' property to 'true', rtspsrc
acts as follows:
* For a secured profile (RTP/SAVP or RTP/SAVPF), any media in the SDP
returned by the server for which a MIKEY key management applies is
eligible for client-managed mode. The MIKEY from the server is then
ignored.
* rtspsrc sends a SETUP with a MIKEY payload proposed by the user. The
payload is formed by calling the 'request-rtp-key' signal for each
eligible stream. During initialisation, 'request-rtcp-key' is also
called as usual. The keys returned by both signals should be the same
for a single stream, but the mechanism allows a different approach.
* The user can start re-keying of a stream by calling SET_PARAMETER.
The convenience signal 'set-mikey-parameter' can be used to build a
'KeyMgmt' parameter with a MIKEY payload.
* After the server accepts the new parameter, the user can call
'remove-key' and prepare for the new key(s) to be served by signals
'request-rtp-key' & 'request-rtcp-key'.
* The signals 'soft-limit' & 'hard-limit' are called when a key
reaches the limits of its utilisation.
This commit adds support for:
* client-managed MIKEY mode in rtspsrc.
* Master Key Index (MKI) parsing and encoding to GstMIKEYMessage.
* re-keying using the signals 'set-mikey-parameter' & 'remove-key' and
then by serving the new key via 'request-rtp-key' & 'request-rtcp-key'.
* 'soft-limit' & 'hard-limit' signals, similar to those provided by srtpdec.
See also:
* https://www.rfc-editor.org/rfc/rfc3830
* https://www.rfc-editor.org/rfc/rfc4567
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7587>
When force-live is TRUE, aggregator correctly changes its state with
NO_PREROLL, but it did not previously set live to TRUE on the latency
query unless something upstream was live.
Fix this by or'ing force_live into the result.
Also improve the debug output.
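In terms of the latency query handling, the change boils down to
something like this (a sketch with an assumed `force_live` flag, not the
exact aggregator code):
```
#include <gst/gst.h>

static void
answer_latency_query (GstQuery * query, gboolean force_live)
{
  gboolean live;
  GstClockTime min, max;

  gst_query_parse_latency (query, &live, &min, &max);
  /* report live if either upstream is live or force-live is enabled */
  gst_query_set_latency (query, live || force_live, min, max);
}
```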
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7718>
Clients that have already gotten a signal that the clock is synced may
rely on getting a corresponding signal when the clock is marked as
corrupted, so that they can take appropriate action. So send a clock
signal indicating no sync when the corrupted state is identified.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7664>
This allows the stream to drive the buffers submitted to the display server.
If the application does not receive frame events for a period of time due to
minimization or tty switch for example, instead of waiting to process and
then catching up when frame events resume, the stream will resume instantly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7691>
There is no requirement for a base DRM format to be supported by
libgstvideo in order to be uploaded to. Don't limit uploads to DRM
fourccs that have a libgstvideo format mapping. This notably enables
AFBC support, which uses an opaque format that does not have a linear
definition. It also adds R8/RG88 and similar formats that are not yet
mapped in libgstvideo.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7689>
When the stream resolution changes, it is necessary to negotiate new
pools and to update the caps.
A resolution change can occur on a new sequence or a new picture, so
move the resolution change detection code into a common function.
For memory allocation reasons, only allow a resolution change on a
non-keyframe if the driver supports the remove-buffers feature.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
We must drain the pending output pictures so that the subclass can
renegotiate the caps. Not doing so while renegotiating would mean that
the subclass has to do an allocation query before pushing the caps.
Pushing the caps now without draining would also not work, since these
caps won't match the format of the pending buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
Add helper functions to call the VIDIOC_REMOVE_BUFS ioctl.
If the driver supports this feature, buffers are removed from the queue
when:
- the pool is detached from the decoder,
- the pool is released,
- allocation fails.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
Use the VIDIOC_CREATE_BUFS ioctl to create buffers instead of
VIDIOC_REQBUFS, because it also allows creating buffers while streaming.
To prepare for the introduction of VIDIOC_REMOVE_BUFS, create the
buffers one by one instead of as a range. This way holes can be filled
in the future.
gst_v4l2_decoder_request_buffers() is still used to remove all the
buffers of the queue.
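For reference, creating a single buffer with VIDIOC_CREATE_BUFS looks
roughly like this (plain V4L2 sketch, error handling omitted):
```
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Create one MMAP buffer for the queue described by `fmt` and return
 * its index, or -1 on error. Calling this repeatedly creates buffers
 * one by one instead of requesting a whole range at once. */
static int
create_one_buffer (int fd, struct v4l2_format *fmt)
{
  struct v4l2_create_buffers create;

  memset (&create, 0, sizeof (create));
  create.count = 1;
  create.memory = V4L2_MEMORY_MMAP;
  create.format = *fmt;

  if (ioctl (fd, VIDIOC_CREATE_BUFS, &create) < 0)
    return -1;

  return create.index;   /* index of the newly created buffer */
}
```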
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
When a data channel within a session was removed after a proper close,
references to the error_ignore_bin elements of the data channel's
appsrc/appsink were left in webrtcbin.
This caused the bin objects to be kept and not freed until the whole
webrtc session was terminated. Among other things, that includes a
thread from the appsrc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7675>
We want to ensure the stream-collection is present on the pad (as a sticky
event) before we expose the pad.
This is more reliable since it will ensure it is present before any other event
is pushed through.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7609>
- Add mtd_meta_clear to allow specific analytics metas to handle the
clear operation specific to their type.
- Clear attached mtd's when the analytics meta is freed. When the buffer
the analytics meta is attached to does not come from a buffer pool,
gst_analytics_relation_meta_clear will not be called unless we
explicitly call it in _free. This is important because otherwise the
_mtd_clear functions are not called, which leads to leaks of any memory
allocated by embedded mtd's.
- Unref in transform if it's a copy
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6026>
FLUSH_STOP is meant to clear the flushing state of pads and elements
downstream, not to process data. Hence, a FLUSH_STOP should not
propagate sticky events. This is also consistent with how flushes are a
special case for probes.
Currently this is almost always the case, since a FLUSH_STOP is
__usually__ preceded by a FLUSH_START, and events (sticky or not) are
discarded while a pad has the FLUSHING flag active (set by FLUSH_START).
However, it is currently assumed that a FLUSH_STOP not preceded by a
FLUSH_START is correct behavior, and this will occur while autoplugging
pipelines are constructed. This leaves us with an unhandled edge case!
This patch explicitly disables sending sticky events when pushing a
FLUSH_STOP, instead of relying on the flushing flag of the pad, which
will break in the edge case of a FLUSH_STOP not preceded by a
FLUSH_START.
If sticky events are propagated in response to a FLUSH_STOP, the
flushing thread can end up deadlocked in blocking code of a downstream
pad, such as a blocking probe. Instead, those events should be
propagated from the streaming thread of the pad when handling a
non-flushing synchronized event or buffer.
This fixes a deadlock found in WebKit with playbin3 when seeks occur
before preroll, where the seeking thread ended up stuck in the blocking
probe of playsink:
https://github.com/WebPlatformForEmbedded/WPEWebKit/issues/1367
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7632>