The uvcsink was limited to transferring only YUY2 and MJPEG. For the
other uncompressed formats there is no technical reason not to support them.
Since gst_video_format_to_string() already supports more fourccs than
just YUY2, use the default path in gst_v4l2uvc_fourcc_to_bare_struct()
to create structures for more formats and bail out if the returned
format is not an uncompressed type.
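As an illustration of the idea (not the actual uvcsink code), a helper along these lines maps a raw fourcc to a caps structure and bails out for anything gst_video_format_from_fourcc() does not recognise as uncompressed:
```
#include <gst/video/video.h>

/* Illustrative sketch only: map a V4L2 fourcc to a raw caps structure and
 * bail out when it is not a known uncompressed format. */
static GstStructure *
fourcc_to_raw_structure (guint32 fourcc)
{
  GstVideoFormat format = gst_video_format_from_fourcc (fourcc);

  if (format == GST_VIDEO_FORMAT_UNKNOWN)
    return NULL;                /* compressed or unsupported format */

  return gst_structure_new ("video/x-raw",
      "format", G_TYPE_STRING, gst_video_format_to_string (format), NULL);
}
```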
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6037>
A zero-sized box is not really a problem and can be skipped so that any
boxes following it are still parsed.
BMD ATEM devices specifically write a zero-sized bmdc box in the sample
description, followed by the avcC box in the case of H.264. Previously the
avcC box would simply not be read at all and the file would be unplayable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7564>
This notably follows the way we order the template caps and keeps the
format:Interlaced caps at the end. This change also fixes
an early skip check that would skip when a driver only supports
alternate interlacing for a specific format. It also fixes
a bug where only the last resolution of a discrete frame size
was allowed to use format:Interlaced. Finally, similar to the template
caps code, simplify the caps for each feature, making the debug output
manageable and (marginally) improving negotiation speed.
This change will make it easier to introduce memory:DMABuf.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7540>
In qml6glsrc, we capture the application by copying the back buffer into
our own FBO. The afterRendering() signal is too soon: according to the apitrace,
at that point the application has only been rendered into a Qt-internal buffer
that is used as a cache for refresh.
Use the afterFrameEnd() signal instead. This works with no delay on GLES. With GL
it seems to reduce the delay from 2 frames to 1 (this may be platform specific). A
different recording technique would be needed to completely remove this
delay.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7351>
In fact, the va decoder is just an internal helper class and access to it
is under the control of the decoder elements. So far, there is no parallel
operation on it.
On the other hand, some code scanning tools report race condition issues.
For example, the "context" field is protected by the lock in _open()
but is not protected in _add_param_buffer().
So we just delete all of its lock usage.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>
Need HAVE_CONFIG_H to avoid build failure on Solaris 11.4 with gcc 14.1:
../subprojects/gstreamer/tests/misc/../../libs/gst/net/gstnetutils.c:71:7:
error: implicit declaration of function ‘setsockopt’
[-Wimplicit-function-declaration]
71 | if (setsockopt (fd, IPPROTO_IP, IP_TOS, &tos, sizeof (tos)) < 0) {
| ^~~~~~~~~~
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7553>
Timestamps are untouched by default, but the new mode can be enabled to replace RTP timestamps
with ones generated from the buffer PTS. It is an enum in case different modes are needed in the future.
This allows an rtpjitterbuffer to do proper drift compensation, so that the stream coming out of gst-rtsp-server
does not drift compared to the pipeline clock, nor compared to the RTCP NTP times.
Most of the code is borrowed from rtpbasepayload, as it's exactly its behaviour that I wanted to bring here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7526>
Valgrind complains about uninitialized memory used in an ioctl
Syscall param ioctl(VKI_V4L2_G_TUNER).reserved points to uninitialised byte(s)
at 0x719294F: ioctl (ioctl.c:36)
by 0x3126A817: gst_v4l2_fill_lists (v4l2_calls.c:185)
by 0x3126A817: gst_v4l2_open (v4l2_calls.c:589)
by 0x3123F1C2: gst_v4l2_device_provider_probe_device (gstv4l2deviceprovider.c:122)
by 0x3123F648: gst_v4l2_device_provider_device_from_udev (gstv4l2deviceprovider.c:301)
by 0x3123F998: provider_thread (gstv4l2deviceprovider.c:395)
by 0x796FA50: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.7200.4)
by 0x710CAC2: start_thread (pthread_create.c:442)
by 0x719DA03: clone (clone.S:100)
Address 0x44008a34 is on thread 11's stack
in frame #1, created by gst_v4l2_open (v4l2_calls.c:524)
Uninitialised value was created by a stack allocation
at 0x3126A024: gst_v4l2_open (v4l2_calls.c:524)
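A minimal sketch of the kind of fix this implies (zero-initialising the struct, including its reserved bytes, before the ioctl); the helper below is illustrative and not the actual v4l2_calls.c code:
```
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int
query_tuner (int fd, unsigned int index)
{
  struct v4l2_tuner tuner;

  /* zero everything, including the reserved[] padding valgrind warns about */
  memset (&tuner, 0, sizeof (tuner));
  tuner.index = index;

  return ioctl (fd, VIDIOC_G_TUNER, &tuner);
}
```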
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6144>
Be smarter when allocating sink and source memory pools to reduce the
memory footprint. Use gst_v4l2_decoder_get_render_delay() to know the
number of buffers needed by the downstream element.
Handle errors in case of memory allocation failures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7544>
This is not truly supported in V4L2, but we can emulate it similarly to
what other elements do. In this patch we ensure that 0/1 is accepted by
encoders (caps query), and use a default of 30 fps whenever we need to
set a framerate in the driver.
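A minimal sketch of the emulation idea (not the actual gstv4l2object code), assuming the frame rate is programmed through VIDIOC_S_PARM on the OUTPUT queue:
```
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int
set_framerate (int fd, int fps_n, int fps_d)
{
  struct v4l2_streamparm parm;

  if (fps_n == 0) {
    /* variable frame rate (0/1): emulate with a 30 fps default */
    fps_n = 30;
    fps_d = 1;
  }

  memset (&parm, 0, sizeof (parm));
  parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
  /* timeperframe is the inverse of the frame rate */
  parm.parm.output.timeperframe.numerator = fps_d;
  parm.parm.output.timeperframe.denominator = fps_n;

  return ioctl (fd, VIDIOC_S_PARM, &parm);
}
```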
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7352>
It has to be included in the block duration but in GStreamer we're not
including it in the buffer duration, so it has to be added again here.
Not including it in the block duration can lead to fatal errors when playing
back with Firefox if there are more padding samples than actual samples, e.g.
> D/MediaDemuxer WebMDemuxer[7f6a0808b900] ::GetNextPacket: Padding frames larger
> than packet size, flagging the packet for error (padding: {13500000,1000000000},
> duration: {6000,1000000}, already processed: false)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7502>
By setting the earliest time to timestamp + 2 * diff there would be a difference
of 1 * diff between the current clock time and the earliest time the element
would let through in the future. If e.g. a frame is arriving 30s late at the
sink, then not just all frames up to that point would be dropped but also 30s of
frames after the current clock time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7459>
This makes sure that if upstream has different latencies we're still
outputting buffers with increasing timestamps across the different streams,
unless buffers are arriving after the latency deadline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7500>
splitmuxsink can't possibly know how much latency it will introduce as it always
keeps one GOP around before outputting something. This breaks the latency
configuration of the pipeline and we're better off just pretending that
everything downstream of the sinkpads is not live.
Especially muxers that are based on aggregator and time out on the latency
deadline can easily misbehave otherwise, as the deadline will usually be exceeded.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7499>
Add a prefer-stream-ordered-alloc property to GstCudaContext.
If the stream-ordered-allocation buffer pool option is not configured
and this property is enabled, the buffer pool will enable stream-ordered
allocation. Otherwise it will follow the default behavior.
If the GST_CUDA_ENABLE_STREAM_ORDERED_ALLOC environment variable is set,
the default behavior is to enable stream-ordered allocation.
Otherwise the synchronous alloc/free method will be used.
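A usage sketch, assuming the context is created with gst_cuda_context_new() and that the header lives under gst/cuda/; only the property name comes from this change:
```
#include <gst/cuda/gstcudacontext.h>    /* assumed header path */

static GstCudaContext *
make_stream_ordered_context (guint device_id)
{
  GstCudaContext *ctx = gst_cuda_context_new (device_id);

  if (ctx) {
    /* prefer stream ordered allocation for pools created from this context */
    g_object_set (ctx, "prefer-stream-ordered-alloc", TRUE, NULL);
  }

  return ctx;
}
```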
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>
Default CUDA memory allocation causes implicit global
synchronization. Stream-ordered allocation can avoid this,
since memory allocation and free operations are asynchronous
and executed in the associated CUDA stream context.
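For reference, a hedged illustration of the difference at the CUDA runtime level (not GStreamer code): cudaMallocAsync()/cudaFreeAsync() (CUDA 11.2+) are ordered only against the given stream, unlike cudaMalloc()/cudaFree(), which can synchronise the whole device:
```
#include <cuda_runtime.h>

static cudaError_t
stream_ordered_example (cudaStream_t stream, size_t size)
{
  void *ptr = NULL;
  cudaError_t err;

  err = cudaMallocAsync (&ptr, size, stream);
  if (err != cudaSuccess)
    return err;

  /* ... enqueue kernels or copies that use ptr on this stream ... */

  /* freed in stream order, no implicit device-wide synchronization */
  return cudaFreeAsync (ptr, stream);
}
```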
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7427>
While transforming the internals of waylandsink into a library, the
context type name was accidentally changed, causing an ABI break. Change
it back to its original name (as used by libgstgl), and add support for
the misnamed version as a backward compatibility measure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7482>
With earlier OpenJPEG versions the version was incorrectly encoded in the
opj_config.h header, which caused a compilation warning.
```
../subprojects/gst-plugins-bad/ext/openjpeg/gstopenjpegenc.c:943:5: warning: ‘bpp’ is deprecated:
Use prec instead [-Wdeprecated-declarations]
943 | comps[i].bpp = GST_VIDEO_FRAME_COMP_DEPTH (frame, i);
| ^~~~~
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7481>
When no ports are given, gst_jack_get_ports() is called to get all the
(physical) output ports but then the result is ignored, triggering the
"No physical output ports found..." error.
Instead, move the queried ports to the variable we're going to use
later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7474>
If the UVC gadget announces multiple formats in the descriptors, the uvcsink
doesn't select the actual format but lets the UVC host select it.
If the GStreamer pipeline is started before a UVC host has selected the format,
upstream decides on a format until the UVC host has decided. In this case, the
current format needs to be set based on the caps from the caps event, to be able
to detect whether the format selection by the UVC host requires a format change
in the GStreamer pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7473>
The uvcsink may be put into the READY state to start listening for UVC requests.
Therefore, the UVC host may set a streaming format before the GStreamer pipeline
is started and before the uvcsink has received a caps event. In this case,
prev_caps will be NULL.
If the EVENT_CAPS has not been received, skip the check of whether the format
needs to be changed, since the sink will be started with the format selected by
the UVC host anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7473>
Until now we were overriding pad functions while forgetting about the function
data (which is set using the _full variant of the function setters), meaning
that the data was lost and any user of that feature would get empty data when
the wrapped function was called.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7466>
Add a property to control the number of in-flight GPU commands
(default is unlimited). Note that the actual maximum is defined
by the d3d12device's direct command queue object, which is currently 32,
so the total number of scheduled GPU commands cannot exceed 32.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7444>
If two (or more) rtpfunnel elements are cascaded, then only one will
realistically have information on the particular ssrc that is in use for a
particular input stream. As such, any key unit requests may never reach the
corresponding encoder.
This has been discovered by combining simulcast and BUNDLE with webrtcbin.
simulcast uses one rtpfunnel, and BUNDLE uses another rtpfunnel.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7405>
Sometimes under certain loads, VT can error out with kVTVideoEncoderMalfunctionErr or kVTVideoEncoderNotAvailableNowErr.
These have been reported to happen more often than usual if CopyProperty/SetProperty() is used close to the encode call.
Both can be worked around by restarting the encoding session.
These errors can be returned either directly from VTCompressionSessionEncodeFrame() or later in the encoding callback.
This patch handles both scenarios the same way - a session restart is attempted on the next encode_frame() call.
If the error is returned immediately by the encode call, it's possible that some correct frames will still be given to
the output callback, but for simplicity (+ because I wasn't able to verify this scenario) let's just discard those.
In addition, this commit also simplifies the beach/drop logic in enqueue_buffer.
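A small sketch of the recovery condition (illustrative, not the actual vtenc code); the two status constants come from VideoToolbox's VTErrors.h:
```
#include <VideoToolbox/VideoToolbox.h>

/* Returns true when the status should trigger a session restart on the
 * next encode_frame() call. */
static Boolean
needs_session_restart (OSStatus status)
{
  return status == kVTVideoEncoderMalfunctionErr ||
      status == kVTVideoEncoderNotAvailableNowErr;
}
```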
Related bug reports in other projects:
http://www.openradar.me/45889262
https://github.com/aws/amazon-chime-sdk-ios/issues/170#issuecomment-741908622
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7173>
If the state is changing from PLAYING to PAUSED and the rate is reset to 1,
which makes the seek position valid, the current code will seek even for
streams that are not seekable. So we need to check whether the stream is
seekable before seeking.
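A generic sketch of the guard (not the element's actual code), using a seeking query before issuing the seek:
```
#include <gst/gst.h>

static gboolean
seek_if_seekable (GstElement * element, gint64 position)
{
  GstQuery *query = gst_query_new_seeking (GST_FORMAT_TIME);
  gboolean seekable = FALSE;

  if (gst_element_query (element, query))
    gst_query_parse_seeking (query, NULL, &seekable, NULL, NULL);
  gst_query_unref (query);

  if (!seekable)
    return FALSE;

  return gst_element_seek_simple (element, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE, position);
}
```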
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7441>
All accesses to it were protected either by a mutex already, or at least
used yet another mutex for gst_poll_read_control() / gst_poll_write_control().
The usage of GstPoll has to stay for backwards compatibility as it is
used to manage the (public) fd that can be used to wait for the bus to
be ready, but this switch at least simplifies the implementation a bit
and results in fewer atomic operations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6684>
It was iterating over each field and after fixating its value was again
iterating over every field to find where to store the value.
Instead directly overwrite the value after validating it.
Also actually check that the structure is writable before modifying its fields
by using gst_structure_map_in_place() instead of gst_structure_fixate().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7420>
If the pending remote description has an invalid BUNDLE group _parse_bundle()
triggers early return from _create_answer_task(), before ret has been
initialized, so it needs to be checked before attempting to call
gst_sdp_message_copy().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7423>
The initial calculation for the precision shift was wrong and would allow for
overflows during the calculations which were not detected and lead to wrong
results.
Also add a test for a case where overflows were previously not detected and
caused a completely wrong result.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7406>
webrtcsrc first creates recvonly transceivers with codec-preferences
and expects that after applying a remote description, the
previously created transceivers are used rather than having new
transceivers created.
When pairing webrtcsink + webrtcsrc, the offer sdp from webrtcsink has a media
section with sendonly direction. In !7156, which was implemented following
RFC9429 Section 5.10, we only reuse an unassociated transceiver when applying a
remote description if the media is sendrecv or recvonly, and that caused creation
of new transceivers when applying a remote offer in webrtcsrc, thus losing
information from codec preferences like the RTP extension headers in the
previously created transceivers.
Since the change in !7156 broke existing code from webrtcsrc, relax the condition
for reusing unassociated transceivers and add a test to document this behavior which
wasn't covered by any tests before.
Fixes #3753.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7417>
```
In file included from ../subprojects/gst-plugins-good/ext/qt6/gstqsg6material.cc:31:
../subprojects/gst-plugins-good/ext/qt6/gstqsg6material.h:69:17: error: private
field 'mem_' is not used [-Werror,-Wunused-private-field]
69 | GstMemory * mem_;
| ^
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7414>
Before trying to retrieve a GMainContext from a provided
GstPlayerSignalDispatcher, check that it is actually a
GstPlayerGMainContextSignalDispatcher. If not, use the
default GMainContext for dispatching signals via the adapter.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7392>
There were two main issues:
- The mix matrix was not protected with the object lock.
- The code was mistakenly assuming that after updating the mix matrix,
  a reconfigure event sent upstream would be enough to cause upstream to
  send caps again, and the converter was only reconstructed in ->set_caps.
That was not actually enough: if the new matrix didn't affect the
number of input / output channels, there was no reason for upstream to do
anything after getting the unchanged caps.
The fix for this was to have ->transform also recreate the converter
when needed, with the added subtlety that depending on the mix matrix
the element could be set to passthrough. This means that when setting
the mix matrix the converter also had to be recreated immediately to
check if the element had to be switched back to non-passthrough.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7363>
Don't reuse the same stats state structure across multiple
get-stats calls. Make each callback take a copy of the
non-changing fields it needs and use a local working copy
to avoid crashing.
Fixes problems with the unit test introduced in MR !7338 sometimes crashing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7387>
Since bins can set the context of their child elements, the set_context()
vmethod shouldn't call bus message post methods, since posting locks the parent
object (the bin), which might already be locked, leading to a deadlock.
Fixes: #3706
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7378>
When multiple streams are bundled on the same transport,
the statistics would end up incorrectly generated,
as each pad would regenerate stats for every ssrc on the
transport, overwriting previous iterations and assigning
bogus media kind and other values to the wrong ssrc.
Fix by making sure each pad only loops and generates
statistics for the one ssrc that pad is receiving / sending.
Add a unit test checking that the codec kind field in RTP statistics
is now generated correctly.
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2555
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7338>
Add a new videosink element for applications based on the Windows
composition API. Unlike d3d12videosink, this element only creates a
DXGI swapchain via IDXGIFactory2::CreateSwapChainForComposition(),
without an actual window handle, so that the video scene can be composed
via a Windows native composition API, such as DirectComposition.
Note that this videosink does not support the GstVideoOverlay interface
by design.
The swapchain created by this element can be used with
* DirectComposition's IDCompositionVisual in Win32 app
* WinRT and WinUI3's UI.Composition in Win32/UWP app
* UWP and WinUI3 XAML's SwapChainPanel
See also the examples in this commit, which show usage of the videosink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7287>
Prevent the default webrtc test machinery from attempting to
create and set an answer when we're just testing rollback
of the offers. Add some locking / waiting to ensure the test
is complete before exiting.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365>
Use pattern matching against expected error strings that
might include internal element names, where the names
are default assigned with incrementing integers. When running
with CK_FORK=no, there may have been previous tests that
ran in the same process and incremented the counters more
than when running in the default fork-per-test mode.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365>
This patch addresses the issue where GStreamer would throw an error when
attempting to use bt2100-hlg colorimetry with V4L2, which is not
supported by the current V4L2 kernel. When bt2100-hlg colorimetry is set
from caps, the check for transfer (GST_VIDEO_TRANSFER_ARIB_STD_B67) is
bypassed.
The main improvement is to avoid checking the transfer value in
gst_v4l2_video_colorimetry_matches when it is
GST_VIDEO_TRANSFER_ARIB_STD_B67. This is because the transfer value in
the cinfo parameter comes from gst_v4l2_object_get_colorspace, which
converts the transfer to another value, causing a mismatch.
Since the kernel does not support GST_VIDEO_TRANSFER_ARIB_STD_B67,
gst_v4l2_object_get_colorspace cannot map it correctly from V4L2 to
GStreamer. Therefore, we ignore this check to prevent errors.
changes:
- Added a condition in gst_v4l2_video_colorimetry_matches to bypass the
transfer check when the transfer is GST_VIDEO_TRANSFER_ARIB_STD_B67.
- Ensured that the pipeline does not throw errors due to unsupported
bt2100-hlg colorimetry in V4L2.
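A sketch of the bypass (not the actual gst_v4l2_video_colorimetry_matches() code), assuming the caps side carries the ARIB STD-B67 transfer:
```
#include <gst/video/video.h>

static gboolean
colorimetry_matches (const GstVideoColorimetry * cinfo,
    const GstVideoColorimetry * caps_cinfo)
{
  gboolean transfer_ok;

  if (caps_cinfo->transfer == GST_VIDEO_TRANSFER_ARIB_STD_B67)
    transfer_ok = TRUE;         /* the kernel cannot represent HLG, skip */
  else
    transfer_ok = cinfo->transfer == caps_cinfo->transfer;

  return cinfo->range == caps_cinfo->range &&
      cinfo->matrix == caps_cinfo->matrix &&
      cinfo->primaries == caps_cinfo->primaries && transfer_ok;
}
```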
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7212>
num_backward_references > 0 means we need to cache several frames
after the current frame. But the basetransform class does not
provide any _drain()-like function, so we do not have a chance
to push out our cached frames when an EOS or caps event comes.
Rather than losing the last several frames, we should just give up
on backward references here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348>
The current code forgets to push the first several frames if the forward
reference count is > 0. They are just cached in the history array and are
never deinterlaced and pushed.
For the first several frames, even though there are not enough forward
reference frames, we still need to deinterlace them as normal and push them
afterwards.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348>
"adobe" in app14 marker seem not a null-terminted string. so, when
we use gst_byte_reader_get_string_utf8, more bytes will be read until
null. and "gst_byte_reader_get_uint8 (&reader, &transform)" will almost fail
to read transform
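An illustrative sketch (not the actual jpegparse code), assuming the usual Adobe APP14 layout (5-byte tag, 2-byte version, two 2-byte flag words, 1-byte transform) and reading the tag with a fixed size instead of scanning for a NUL:
```
#include <string.h>
#include <gst/base/gstbytereader.h>

static gboolean
parse_adobe_app14 (const guint8 * payload, guint size, guint8 * transform)
{
  GstByteReader reader;
  const guint8 *tag;

  gst_byte_reader_init (&reader, payload, size);

  if (!gst_byte_reader_get_data (&reader, 5, &tag) ||
      memcmp (tag, "Adobe", 5) != 0)
    return FALSE;

  /* skip version (2), flags0 (2) and flags1 (2) */
  if (!gst_byte_reader_skip (&reader, 6))
    return FALSE;

  return gst_byte_reader_get_uint8 (&reader, transform);
}
```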
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7356>
I didn't find the behavior and purpose of streamsynchronizer documented
or intuitive. Eventually I got Edward to explain it to me, which was
very helpful. Now I'm contributing some docs so that the next person
doesn't have to figure it out by asking around and hoping for an answer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7084>
Fix a playback failure with files where length_size_minus_one == 2.
According to the spec, 2 cannot be a valid value, so such a stream has a
bad config record, but aborting decoding because of that is perhaps too strict;
ffmpeg doesn't seem to check this either.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7213>
librtmp allows for attaching arbitrary AMF objects to the end of the
connect packet, and this is commonly used for authenticating with
servers.
Add a new property, extra-connect-args, that mimics librtmp's behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7054>
When glupload generates sink caps based on the src caps after determining the
upload method, the src caps may only contain the RGBA format.
In this case, the raw caps on the sink pad generated by glupload will only contain the
RGBA format, which will cause caps negotiation to fail, because the filter caps used for
negotiation by the upstream element may only contain other formats, such as xBGR.
Add the formats supported by #GstGLMemory to the raw caps to ensure that caps
negotiation succeeds.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7061>
With GLES 2.0 we are forced to use CopyTexImage2D, which requires
passing an internal format. With Qt 6 eglfs, we need to pass GL_RGB
instead, probably because of how the texture has been created. As it's
hard to guess, simply fall back to GL_RGB on failure. This fixes usage
of qml6glsrc with the eglfs backend, without losing support for
semi-transparent windows on other platforms.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7321>
While analyzing gst_vulkan_get_or_create_image_view_with_info() it
seems obvious that this function can return NULL, and that this should be
covered in the return annotations. However, closer inspection indicates
that this only happens in a precondition check when the incoming arguments
are incompatible with each other, and that the function should not be
considered one that optionally returns a pointer.
Signify this by using precondition checks instead of an open-coded
if-return-NULL.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736>
When checking for renegotiation against a local offer,
reverse the remote direction in the corresponding answer
to fix falsely not triggering on-negotiation-needed when
switching (for example) from local sendrecv -> recvonly
against a peer that answered 'recvonly'.
In the other direction, when the local was the answerer,
renegotiation might trigger when it didn't need to -
whenever the local transceiver direction differs from
the intersected direction we chose. Instead what we want
is to check if the intersected direction we would now
choose differs from what was previously chosen.
This makes the behaviour in both cases match the
behaviour described in
https://www.w3.org/TR/webrtc/#dfn-check-if-negotiation-is-needed
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7303>
In order to use oes-external, the qml6glsink needs a fragment shader that uses
the samplerExternalOES.
The qsb tool is not able to handle shaders that contain samplerExternalOES since
this feature is not supported by all target shading languages. The qsb tool is
able to replace a shader in the qsb file to handle this use case. Use it to
generate a shader variant that uses samplerExternalOES for OpenGL ES and select
that variant if the qml6glsink negotiated texture target oes-external.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7319>
Fixes for basic rollback (from have-local-offer or have-remote-offer to
stable). Allow having no SDP attached to the webrtc session description
in that case, and avoid all the transceiver and ICE update logic
normally applied when entering the stable signalling state
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7304>
release_frame() can be useful for manually dropping frames without posting QoS messages like finish_frame() would.
Matches the same kind of API on the decoder side of things.
Modifies the behaviour of release_frame() to make sure events from released frames are stored as 'pending'
and pushed before the next non-dropped frame. This is needed because now release_frame() can be called outside of
finish_frame(), so we would potentially just lose events and bad things would happen.
drop_frame() was also added to match the decoder API. It functions almost identically to finish_frame() without a buffer
attached to the frame, except instead of immediately pushing the frame's events, it will store them as pending.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7190>
In the case where conn->input_stream is NULL and GLib was built with
"glib_checks" enabled, g_pollable_input_stream_read_nonblocking()
returns -1 but does not set "err".
The call stack:
read_bytes() ->
fill_bytes() ->
fill_raw_bytes()
The return value -1 is passed up to read_bytes() and incorrectly
processed there after the "error:" label.
This change sets the return value to EINVAL instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7210>
Fix an inverted condition when checking if sink pad caps match
the codec-preference of an unassociated transceiver, and
fix a condition check for transceiver media kind to
avoid matching sinkpad requests where caps aren't provided
against unassociated transceivers where the caps might
not match later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7237>
According to the ffmpeg documentation[1], the read_packet function should never
return 0. ffmpegdata_peek returns 0 when the stream is at EOF, causing us to fail
to detect EOF and never close the pipeline, continually spinning waiting for more
data. ffmpeg instead wants an AVERROR_EOF code to signal EOF.
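A sketch of a read_packet callback obeying that contract (opaque is assumed to be a plain FILE * here, not the GStreamer ffmpegdata handle):
```
#include <errno.h>
#include <stdio.h>
#include <libavformat/avio.h>
#include <libavutil/error.h>

static int
read_packet_cb (void *opaque, uint8_t * buf, int buf_size)
{
  FILE *f = opaque;
  size_t n = fread (buf, 1, buf_size, f);

  if (n == 0)
    return feof (f) ? AVERROR_EOF : AVERROR (EIO);

  return (int) n;
}
```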
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4999>
With MR 7156, transceivers and transports are created earlier,
but for sendrecv media we could get `not-linked` errors due to
transportreceivebin not being connected to rtpbin yet when incoming
data arrives.
This condition wasn't being tested in elements_webrtcbin, but could be
reproduced in the webrtcbidirectional example. This commit now also
adds a test for this, so that this doesn't regress anymore.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7294>
A previous fix, a275e1e029, is correct but too permissive, since it treats all
unmatched NAL units the same as AU delimiters even though some other NAL unit
types can be encountered in the processing loop.
The problem this can cause is that some hardware decoders experience bad
performance when handling FD units that precede the SPS.
This change restores the original behavior for FDs so that they're ignored until
the SPS is received and it preserves the codec conformance test gains that the
fix has achieved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7166>
glCheckFramebufferStatus() can fail by returning 0, and otherwise returns a
status. Fix the trace to make it clear when we get an unknown status
as opposed to an error, in which case we also trace the error code.
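For illustration (not the gstglframebuffer code itself), the distinction the trace should make:
```
#include <stdio.h>
#include <GLES2/gl2.h>

static void
check_fbo_status (void)
{
  GLenum status = glCheckFramebufferStatus (GL_FRAMEBUFFER);

  if (status == 0)
    printf ("glCheckFramebufferStatus failed, GL error 0x%x\n", glGetError ());
  else if (status != GL_FRAMEBUFFER_COMPLETE)
    printf ("framebuffer incomplete, status 0x%x\n", status);
}
```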
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7291>
Parts may emit bus messages that want to take the splitmuxsrc
lock and prevent the downward state change. Avoid a deadlock
after a part sends an error message by taking a ref and
dropping the lock around the unprepare call
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Publish fragment-id in the messages that splitmuxsink and splitmuxsrc
send, so when they are received out of order (due to async finalization,
for example), they can still be identified / ordered correctly.
Fix a race in the splitmuxsink unit test where messages might be
received out of order
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a `num-lookahead` property that will 'prepare' a number of
fragments in advance of the playhead if they have been deactivated
or closed due to a limited `num-open-fragments`. This can help
avoid playback stalls when reading the indexes or headers of the next
file from high-latency media or on resource-limited machines.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Publish the playback offset and duration in the
splitmuxsink-fragment-closed bus message as each fragment
finishes.
These can be passed to splitmuxsrc via the 'add-fragment'
signal to avoid splitmuxsrc having to measure all files on startup.
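A sketch of consuming that bus message; the message name comes from this change, while the "fragment-offset" / "fragment-duration" field names are assumptions for illustration only:
```
#include <gst/gst.h>

static void
on_bus_message (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  const GstStructure *s = gst_message_get_structure (msg);
  GstClockTime offset, duration;

  if (s && gst_structure_has_name (s, "splitmuxsink-fragment-closed")) {
    if (gst_structure_get_clock_time (s, "fragment-offset", &offset) &&
        gst_structure_get_clock_time (s, "fragment-duration", &duration)) {
      /* cache offset/duration so a later run can pass them to
       * splitmuxsrc's 'add-fragment' signal */
      g_print ("fragment offset %" GST_TIME_FORMAT ", duration %"
          GST_TIME_FORMAT "\n", GST_TIME_ARGS (offset),
          GST_TIME_ARGS (duration));
    }
  }
}
```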
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a reasonably large default for the number of simultaneous
files to open, one that won't affect users who split recordings into
a few large files, but will help prevent fd exhaustion for users
who make recordings with lots of small fragments.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
When calculating the timestamp offset to apply to
media streams in a fragment, ensure that all fragments
are offset "together" to preserve alignment in cases
where there might be gaps in a recording at a fragment boundary.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a signal that allows adding fragments with a specific offset
and duration directly to splitmuxsrc's list. By providing the
fragment's offset on the playback timeline and duration directly,
splitmuxsrc doesn't need to measure the fragment making for faster
startup times.
Add a bus message that's published when fragments are measured,
reporting the offset and duration, so they can be cached by an
application and used on future invocations.
Add examples for handling the bus message and using the 'add-fragment'
signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a property to limit the number of parts splitmux will open
simultaneously. Modify the part handling to support deactivating
and reactivating the demuxing for each part.
The default is '0', to preserve the existing behaviour of opening
all parts at the beginning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
According to https://w3c.github.io/webrtc-pc/#set-the-session-description
(steps in 4.6.10.), we should be creating and associating transceivers when
setting session descriptions.
Before this commit, webrtcbin deviated from the spec:
1. Transceivers from sink pads were created when the sink pad was
requested, but not associated after setting the local description, only
when signaling is STABLE.
2. Transceivers from remote offers were not created after applying the
remote description, only when the answer is created, and were then
only associated once signaling is STABLE.
This commit makes webrtcbin follow the spec more closely with regards to
timing of transceivers creation and association.
A unit test is added, checking that the transceivers are created and
associated after every session description is set.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7156>
If a downstream buffer pool is offered, vulkanupload checks its allocation
parameters in order to honor them. Only the TRANSFER usage bits, which are
required to upload buffers, are added.
Also, fail if the buffer pool cannot be configured with the current parameters.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7219>
If the stream has a special colorimetry that is not in the colorimetry
list, it will cause negotiation to fail. We should allow passing any
colorimetry, so add an extra structure without the colorimetry field.
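A sketch of the idea (not the element's actual code), appending a copy of each structure with the colorimetry field removed; it assumes the caps are writable:
```
#include <gst/gst.h>

static GstCaps *
append_structures_without_colorimetry (GstCaps * caps)
{
  guint i, n = gst_caps_get_size (caps);

  for (i = 0; i < n; i++) {
    GstStructure *s = gst_structure_copy (gst_caps_get_structure (caps, i));

    gst_structure_remove_field (s, "colorimetry");
    gst_caps_append_structure (caps, s);
  }

  return caps;
}
```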
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7029>
video-info allows encoded formats to have an RGB color matrix, while
v4l2object just leaves the V4L2 matrix at its default when mapping
GST_VIDEO_COLOR_MATRIX_RGB. This causes the GStreamer matrix to be changed
to GST_VIDEO_COLOR_MATRIX_BT601 when mapping the V4L2 colorimetry back.
So add support for encoded formats with an RGB color matrix in v4l2object.
Note that for M2M encoders, we should in theory assume that we can
transfer this value from the OUTPUT to the CAPTURE queue, though that is
only true if the driver does not do CSC. For now we don't support any RGB
codecs, but leave a note for the future.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3952>
The V4L2_MAP_QUANTIZATION macro has been fixed to something a lot saner;
fix our replica accordingly. The new macro now simply sets the quantization
to full range if the pixel format is RGB based, or if the JPEG
colorspace is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3952>