Prevent the default webrtc test machinery from attempting to
create and set an answer when we're just testing rollback
of the offers. Add some locking / waiting to ensure the test
is complete before exiting.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365>
Use pattern matching against expected error strings that
might include internal element names, where the names
are assigned by default with incrementing integers. When running
with CK_FORK=no, previous tests may have run
in the same process and incremented the counters further
than in the default fork-per-test mode.
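A minimal sketch of such a tolerant check in a GStreamer check test; the expected message string below is hypothetical, only the pattern-matching approach is the point:
```
#include <gst/check/gstcheck.h>

/* Sketch: match the error message against a glob pattern so that the
 * auto-assigned element name suffix (e.g. "rtpbin0" vs "rtpbin3" when
 * CK_FORK=no reuses the process) doesn't break the comparison.
 * The expected string here is made up for illustration. */
static void
check_error_message (GstMessage * msg)
{
  GError *err = NULL;
  gchar *dbg = NULL;

  gst_message_parse_error (msg, &err, &dbg);
  fail_unless (g_pattern_match_simple (
          "*Element rtpbin* does not support*", err->message));

  g_clear_error (&err);
  g_free (dbg);
}
```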
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365>
Fix a playback failure with files where length_size_minus_one == 2.
According to the spec, 2 cannot be a valid value, so such a stream has a
bad config record, but breaking decoding because of that is perhaps too
strict, and FFmpeg does not seem to check this either.
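A sketch of the tolerant handling, assuming a typical avcC/hvcC-style parser where the NAL length size is derived from this field (names are illustrative, not the actual code):
```
#include <gst/gst.h>

/* Illustrative helper, not the actual parser code: derive the NAL length
 * size from the lengthSizeMinusOne field and tolerate the reserved
 * value 2 (3-byte lengths) instead of failing playback. */
static guint
nal_length_size_from_config (guint8 length_size_minus_one)
{
  guint size = (length_size_minus_one & 0x03) + 1;

  if (size == 3)
    GST_WARNING ("lengthSizeMinusOne == 2 is not a valid value per spec, "
        "tolerating it like FFmpeg does");

  return size;
}
```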
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7213>
When checking for renegotiation against a local offer,
reverse the remote direction in the corresponding answer
to fix falsely not triggering on-negotiation-needed when
switching (for example) from local sendrecv -> recvonly
against a peer that answered 'recvonly'.
In the other direction, when the local was the answerer,
renegotiation might trigger when it didn't need to -
whenever the local transceiver direction differs from
the intersected direction we chose. Instead what we want
is to check if the intersected direction we would now
choose differs from what was previously chosen.
This makes the behaviour in both cases match the
behaviour described in
https://www.w3.org/TR/webrtc/#dfn-check-if-negotiation-is-needed
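The rule being implemented, sketched with the real GstWebRTC direction enum but illustrative helper names (this is not the actual webrtcbin code):
```
#include <gst/webrtc/webrtc.h>

/* A direction is a pair of send/recv flags. */
static gboolean
dir_sends (GstWebRTCRTPTransceiverDirection d)
{
  return d == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDONLY ||
      d == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV;
}

static gboolean
dir_recvs (GstWebRTCRTPTransceiverDirection d)
{
  return d == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_RECVONLY ||
      d == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV;
}

static GstWebRTCRTPTransceiverDirection
dir_from_flags (gboolean send, gboolean recv)
{
  if (send && recv)
    return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV;
  if (send)
    return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDONLY;
  if (recv)
    return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_RECVONLY;
  return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_INACTIVE;
}

/* Negotiation is needed when the intersection we would choose now
 * differs from the direction previously negotiated. The remote answer
 * direction is reversed first, since it is expressed from the peer's
 * point of view. */
static gboolean
negotiation_needed (GstWebRTCRTPTransceiverDirection local_desired,
    GstWebRTCRTPTransceiverDirection remote_answer,
    GstWebRTCRTPTransceiverDirection previously_chosen)
{
  GstWebRTCRTPTransceiverDirection remote_reversed =
      dir_from_flags (dir_recvs (remote_answer), dir_sends (remote_answer));
  GstWebRTCRTPTransceiverDirection intersected =
      dir_from_flags (dir_sends (local_desired) && dir_sends (remote_reversed),
      dir_recvs (local_desired) && dir_recvs (remote_reversed));

  return intersected != previously_chosen;
}
```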
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7303>
Fixes for basic rollback (from have-local-offer or have-remote-offer to
stable). Allow having no SDP attached to the webrtc session description
in that case, and avoid all the transceiver and ICE update logic
normally applied when entering the stable signalling state.
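For reference, a rollback from the application side might look roughly like this; it is a sketch, and in particular whether a NULL promise is acceptable here should be checked against the webrtcbin documentation:
```
#include <gst/gst.h>
#include <gst/webrtc/webrtc.h>

/* Roll back from have-local-offer (or have-remote-offer) to stable by
 * applying a description of type ROLLBACK. With this change no SDP
 * needs to be attached to that description. */
static void
rollback_local_offer (GstElement * webrtcbin)
{
  GstWebRTCSessionDescription *rollback =
      gst_webrtc_session_description_new (GST_WEBRTC_SDP_TYPE_ROLLBACK, NULL);

  g_signal_emit_by_name (webrtcbin, "set-local-description", rollback, NULL);
  gst_webrtc_session_description_free (rollback);
}
```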
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7304>
With MR 7156, transceivers and transports are created earlier,
but for sendrecv media we could get `not-linked` errors due to
transportreceivebin not being connected to rtpbin yet when incoming
data arrives.
This condition wasn't being tested in elements_webrtcbin, but could be
reproduced with the webrtcbidirectional example. This commit also
adds a test for it, so that it doesn't regress again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7294>
According to https://w3c.github.io/webrtc-pc/#set-the-session-description
(steps in 4.6.10.), we should be creating and associating transceivers when
setting session descriptions.
Before this commit, webrtcbin deviated from the spec:
1. Transceivers from sink pads were created when the sink pad was
requested, but were not associated after setting the local description,
only once signaling reached STABLE.
2. Transceivers from remote offers were not created after applying
the remote description, only when the answer was created, and were then
only associated once signaling reached STABLE.
This commit makes webrtcbin follow the spec more closely with regard to
the timing of transceiver creation and association.
A unit test is added, checking that the transceivers are created and
associated after every session description is set.
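As a rough illustration of what the new test checks (simplified; the real test covers every description that is set), one can fetch the transceivers right after applying a remote offer and verify they already exist and carry an mid:
```
#include <gst/gst.h>
#include <gst/webrtc/webrtc.h>

/* Sketch: after "set-remote-description" has completed for an offer,
 * the transceiver for each media should already exist and carry the
 * negotiated mid, instead of only appearing once signaling is STABLE. */
static void
check_transceiver_associated (GstElement * webrtcbin)
{
  GArray *transceivers = NULL;
  GstWebRTCRTPTransceiver *trans;
  gchar *mid = NULL;

  g_signal_emit_by_name (webrtcbin, "get-transceivers", &transceivers);
  g_assert (transceivers != NULL && transceivers->len > 0);

  trans = g_array_index (transceivers, GstWebRTCRTPTransceiver *, 0);
  g_object_get (trans, "mid", &mid, NULL);
  g_assert (mid != NULL);

  g_free (mid);
  g_array_unref (transceivers);
}
```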
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7156>
Instead of dragging the last destination pipeline stage along as the current
barrier's source pipeline stage (which isn't valid semantics), this patch adds
a parameter to gst_vulkan_operation_add_frame_barrier() to set the source
pipeline stage used to define the barrier.
The previous logic brought problems particularly with queue transfers, when the
new queue doesn't support the stage set during a previous operation in a
different queue.
Now the operation API is closer to Vulkan semantics.
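This mirrors plain Vulkan synchronization2 barriers, where the source and destination stages are always given explicitly per barrier, e.g.:
```
#include <vulkan/vulkan.h>

/* Plain Vulkan for comparison: both the source and the destination
 * pipeline stage are spelled out for each image barrier, instead of the
 * source stage being implicitly inherited from a previous operation
 * (which breaks down across queue transfers). */
static VkImageMemoryBarrier2
example_barrier (VkImage image)
{
  VkImageMemoryBarrier2 barrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER_2,
    .srcStageMask = VK_PIPELINE_STAGE_2_TRANSFER_BIT,
    .srcAccessMask = VK_ACCESS_2_TRANSFER_WRITE_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT,
    .dstAccessMask = VK_ACCESS_2_SHADER_READ_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = image,
    .subresourceRange = {
      .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
      .levelCount = 1,
      .layerCount = 1,
    },
  };

  return barrier;
}
```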
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165>
Since the encoded output changes based on the version,
it does not make sense to check the output bitstream against a fixed
byte array, as the version on the target might vary. So stick
to checking the number of output buffers and the encoded frame size,
similar to the other tests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7141>
A large refactoring commit for adding features and improving performance
* Reuse internal converter and overlay compositor:
The converter can be reused as long as the input and display formats do
not change. Likewise, the overlay compositor only needs to be
reconstructed if the display format changes.
* Don't wait for a full GPU flush on resize or close:
The D3D12 swapchain requires the GPU to be idle in order to resize the
backbuffer, so CPU-side waiting is required for swapchain-related
commands to finish. However, there is no need to wait for a full GPU flush.
* Support multiple sinks on a single external window:
Keep the installed subclass window procedure even if there is no
internal HWND of ours associated with it. This makes window procedure
hooking less racy. The parent HWND's messages are then forwarded to our
internal HWNDs if needed.
* Add support for window handle updates:
The application can change the target HWND even while the videosink is in
playing or paused state, so users can call gst_video_overlay_set_window_handle()
against d3d12videosink at any time. The videosink will update its internal
state and set up resources upon request, as sketched below.
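For the window handle update, the application side uses the standard GstVideoOverlay call; a minimal sketch (the HWND value is whatever window the application owns):
```
#include <gst/video/videooverlay.h>

/* Sketch: retarget a playing/paused d3d12videosink to another HWND. */
static void
switch_output_window (GstElement * d3d12videosink, guintptr new_hwnd)
{
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (d3d12videosink),
      new_hwnd);
}
```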
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013>
Conceptually identical to the present signal of d3d11videosink.
This signal will be emitted with the current render target
(i.e., the swapchain backbuffer) and command queue. A signal handler
can record GPU commands to draw an overlay image or to blend
an image onto the render target.
In addition to d3d12 resources, the videosink will pass
d3d11 and d2d resources depending on the "overlay-mode"
property, so that the signal handler can render using its
preferred/required DirectX API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
From the spec (chapter 34, v1.3.283):
```
UNORM: the components are unsigned normalized values in the range [0, 1]
SRGB: the R, G and B components are unsigned normalized values that represent
values using sRGB nonlinear encoding, while the A component (if one
exists) is a regular unsigned normalized value
```
The difference is the storage encoding: the first is aimed at image
transfers, while the second is for shaders, mostly at the swapchain stage of
the pipeline, where the conversion is done automatically if needed [1].
As far as I have checked, other frameworks (FFmpeg, GTK+), when importing or
exporting images from/to Vulkan, use exclusively UNORM formats, while SRGB
formats are ignored.
My conclusion is that Vulkan formats describe how bits are stored in
memory rather than their transfer functions (colorimetry).
This patch makes two interrelated changes:
1. It reorders certain color format maps so that, in both
gst_vulkan_format_from_video_info() and gst_vulkan_format_from_video_info_2(),
the UNORM formats are tried first when comparing usage, and the SRGB formats
are checked later (illustrated below).
2. It removes the code that checked for colorimetry in
gst_vulkan_format_from_video_info_2(), since it is not storage related.
1. https://community.khronos.org/t/noob-difference-between-unorm-and-srgb/106132/7
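In practice the change means preferring pairs like these when mapping a GstVideoFormat to a VkFormat (a simplified illustration, not the actual mapping table):
```
#include <vulkan/vulkan.h>

/* Both formats store 8-bit RGBA the same way in memory; only the
 * automatic sRGB encode/decode on shader/swapchain access differs.
 * For import/export and image transfers the UNORM variant is the one
 * other frameworks (FFmpeg, GTK+) use, so it is now tried first. */
static const VkFormat rgba_candidates[] = {
  VK_FORMAT_R8G8B8A8_UNORM,     /* preferred: raw storage, no implicit EOTF */
  VK_FORMAT_R8G8B8A8_SRGB,      /* fallback: shader-side sRGB decoding */
};
```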
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6797>
Add pitch tests with different forward and backward playback rates.
Those tests depend on the libSoundTouch version to validate the buffer
checksums. The current checksums were generated with libSoundTouch 2.3.2,
so use the `--force-fallback-for=soundtouch` meson option to build against
the same version.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247>
Since we don't want to expose video decoding API outside of GStreamer, the
header is removed from installation and both source files are renamed as
-private.
The header must remain in gst-libs because it is referred to by GstVulkanQueue,
which is the decoder factory.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6723>
Add a new property in order to notify users of device-removed status.
Once device-removed status is detected, the application should release
all ID3D12Device objects corresponding to the adapter, including
the GstD3D12Device object. Otherwise D3D12CreateDevice() calls for the
adapter will fail.
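A sketch of how an application might react; the property name used below ("device-removed") is an assumption for illustration, so check the element/device documentation for the actual name:
```
#include <gst/gst.h>

/* Hypothetical: assumes a notify-able "device-removed" property on the
 * device object. On removal, the application must drop every
 * ID3D12Device-holding object for that adapter (including the
 * GstD3D12Device) before D3D12CreateDevice() can succeed again. */
static void
on_device_removed (GObject * device, GParamSpec * pspec, gpointer user_data)
{
  GST_WARNING ("D3D12 device removed, releasing all objects for this adapter");
  /* ... tear down pipelines / unref the device object here ... */
}

static void
watch_device (GObject * d3d12_device)
{
  g_signal_connect (d3d12_device, "notify::device-removed",
      G_CALLBACK (on_device_removed), NULL);
}
```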
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6699>
It seems that when D3D11CreateDevice collides in time
with other D3D11 calls, in particular the process of
creating a shader, it can corrupt memory in the driver.
The D3D11 spec doesn't seem to require any thread safety from
D3D11CreateDevice. Following MSDN, it is supposed to be called
at the beginning of the process, while GStreamer calls it with each
new pipeline.
Such driver crashes were frequently reproducible on an
Intel UHD 630 machine.
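The workaround is conceptually just serializing device creation; a minimal sketch (not the actual GStreamer code):
```
#include <glib.h>
#include <d3d11.h>

/* Sketch: serialize D3D11CreateDevice calls process-wide, since the
 * call is not documented as thread-safe and has been seen corrupting
 * driver memory when racing with e.g. shader creation. */
static GMutex create_lock;

static HRESULT
create_device_locked (IDXGIAdapter * adapter, ID3D11Device ** device,
    ID3D11DeviceContext ** context)
{
  HRESULT hr;

  g_mutex_lock (&create_lock);
  hr = D3D11CreateDevice (adapter, D3D_DRIVER_TYPE_UNKNOWN, NULL,
      0, NULL, 0, D3D11_SDK_VERSION, device, NULL, context);
  g_mutex_unlock (&create_lock);

  return hr;
}
```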
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6686>
We suspect that it's not thread-safe to just create and
destroy the device from any thread, particularly because
of D3D11CreateDevice, which is not documented as thread-safe.
While D3D11CreateDevice is usually protected from the outside
by gst_d3d11_ensure_element_data(), it can still race
with the Release() method of another device.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6686>
../subprojects/gst-plugins-bad/tests/check/libs/gstlibscpp.cc:41:
fatal error: gst/mpegts/gstmpegts-enumtypes.h: No such file or directory
We could pass only the needed deps to the libscpp test, but that gets
messier to maintain, so let's add it for consistency.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6643>
There was a potential busy loop occurring because, when we were taking
data from the internal ccbuffer, we were not resetting which field had
written data. This meant that the next time data was retrieved
from the ccbuffer, it was always from field 0 and never from field 1.
This only affects usage of cc_buffer_take_separated() which is only used
by cdp->raw cea608.
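The shape of the fix, sketched with hypothetical field names (the real ccbuffer structure differs):
```
#include <glib.h>
#include <string.h>

/* Illustrative only: per-field pending byte counts in a cc buffer. The
 * bug pattern was reading both fields but only ever clearing field 0's
 * count, so field 1 was never drained and the caller kept spinning. */
typedef struct
{
  guint8 field_data[2][128];
  guint field_len[2];
} ExampleCCBuffer;

static void
example_take_separated (ExampleCCBuffer * buf,
    guint8 * f0, guint * f0_len, guint8 * f1, guint * f1_len)
{
  *f0_len = buf->field_len[0];
  memcpy (f0, buf->field_data[0], *f0_len);
  *f1_len = buf->field_len[1];
  memcpy (f1, buf->field_data[1], *f1_len);

  /* The important part: reset *both* fields after taking the data. */
  buf->field_len[0] = 0;
  buf->field_len[1] = 0;
}
```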
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6423>
CEA-608 (valid) padding removal is available on the input side of ccconverter
or configurable on cccombiner. cccombiner can now configure whether
valid or invalid CEA-608 padding is used and, for valid padding, how long
after valid non-padding to keep sending valid padding.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6300>
The pool currently defaults to performing a layout transition to
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, with some special exceptions for
video usages. This may not be a legal transition depending on the usage.
Provide an API to explicitly control the initial image layout.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5881>
Interlaced MJPEG is a big hack. Most of the streams we've found are from old
AVID tools. There are two methods to detect an interlaced stream: either the
container reports a height bigger than (or double) the image height in the
SOF, or it's signalled in an APP0 marker.
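The first heuristic boils down to a simple comparison (a sketch with illustrative names):
```
#include <glib.h>

/* Sketch: if the container-declared height exceeds the frame height
 * found in the JPEG SOF (typically it is exactly double), each buffer
 * carries two fields. */
static gboolean
looks_interlaced (guint container_height, guint sof_height)
{
  return container_height > sof_height;
}
```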
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5838>
Serialize every GstMeta that supports serialization into the NEW_BUFFER
payload. This is especially important for GstVideoMeta in the case of
multiplanar buffers, or if stride != width.
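On the receiving side this matters because the consumer must honour the per-plane offsets and strides from the deserialized GstVideoMeta rather than assume tightly packed planes; roughly:
```
#include <gst/video/video.h>

/* Sketch: read the plane layout from the (deserialized) GstVideoMeta
 * instead of assuming stride == width and contiguous planes. */
static void
dump_plane_layout (GstBuffer * buffer)
{
  GstVideoMeta *vmeta = gst_buffer_get_video_meta (buffer);
  guint i;

  if (vmeta == NULL)
    return;

  for (i = 0; i < vmeta->n_planes; i++)
    GST_INFO ("plane %u: offset %" G_GSIZE_FORMAT ", stride %d",
        i, vmeta->offset[i], vmeta->stride[i]);
}
```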
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5355>
- GstAnalyticRelationMeta is a base class for analytics
meta. It's able to store analytics results (GstAnalyticRelatableMtd)
and describe the relation between each analysis result.
- GstAnalyticRelationMeta also contains an algorithm able to explore
analysis result relations using a BFS.
- Relations (edges) between analysis results (vertices) are stored in an
adjacency matrix that allows quickly identifying whether two analysis
results are related and by which relation they are related. It also works
for indirect relations and can provide the path of analysis results by
which two analysis results are related (see the sketch after this list).
- One allocation per buffer to store analysis results. Here we rely on
the application to guess how much space will be required to store all
analysis results. This is something that could be improved
significantly but it's a starting point.
- Define common analysis results (classification, object detection,
tracking) that are subclasses of GstAnalyticRelatableMtd. They also
provide examples of how to extend GstAnalyticRelatableMtd so that
extensions benefit from the mechanism to express relations with other
analysis results.
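A self-contained sketch of the adjacency-matrix plus BFS idea (not the library API, just the algorithm the meta uses internally):
```
#include <glib.h>

#define N_RESULTS 8

/* rel[a][b] != 0 means "result a relates to result b". A BFS over this
 * matrix answers "is b reachable from a", i.e. directly or indirectly
 * related; tracking parents in addition would also yield the path. */
static gboolean
related (const guint8 rel[N_RESULTS][N_RESULTS], guint from, guint to)
{
  gboolean visited[N_RESULTS] = { FALSE, };
  guint queue[N_RESULTS], head = 0, tail = 0, i;

  queue[tail++] = from;
  visited[from] = TRUE;

  while (head < tail) {
    guint cur = queue[head++];

    if (cur == to)
      return TRUE;

    for (i = 0; i < N_RESULTS; i++) {
      if (rel[cur][i] && !visited[i]) {
        visited[i] = TRUE;
        queue[tail++] = i;
      }
    }
  }

  return FALSE;
}
```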
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4962>
A payload of 0x80 0x80 means that it's padding. It's not a good idea to
throw this away though, because of the cc_valid field.
According to CEA 10-B section 25.2.1, if cc_valid is zero, the run-in
clock and start bit should not be generated. In practice, this means
that any closed captions will be erased and the end-user TV will show
that captions are not available for this stream. This might have
undesired consequences, e.g. we were just showing a long line of
captions and we disable it before the user has had time to read it, or
you can't enable closed captions during silence/music intervals.
We cannot reliably detect whether there's a currently-silent closed
caption stream or just nothing, but we have this information coming from
upstream, so we can at least not discard it.
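For reference, a CEA-608 byte pair that is pure padding looks like this (a minimal sketch; 0x80 is 0x00 with the odd-parity bit set):
```
#include <glib.h>

/* 0x80 0x80 is a NUL/NUL pair with odd parity, i.e. padding. With
 * cc_valid set it still tells the receiver "the caption service is
 * alive, just silent", so it should be forwarded rather than dropped. */
static gboolean
is_valid_cea608_padding (guint8 cc_valid, guint8 byte0, guint8 byte1)
{
  return cc_valid && (byte0 & 0x7f) == 0 && (byte1 & 0x7f) == 0;
}
```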
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5508>
This pair of elements, inspired from shmsink/shmsrc, send unix file
descriptors (e.g. memfd, dmabuf) from one sink to multiple source
elements in other processes.
The unixfdsink proposes a memfd/shm allocator, which causes for example
videotestsrc to write directly into memories that can be transferred to
other processes without copying.
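A zero-copy sender/receiver pair could then look roughly like this; the `socket-path` property name is assumed here by analogy with shmsink/shmsrc, so check `gst-inspect-1.0 unixfdsink` for the actual properties:
```
#include <gst/gst.h>

/* Sketch: one producer process, any number of consumer processes.
 * The property name "socket-path" is an assumption for illustration. */
static GstElement *
make_sender (void)
{
  return gst_parse_launch (
      "videotestsrc ! video/x-raw,format=NV12 ! "
      "unixfdsink socket-path=/tmp/example-socket", NULL);
}

static GstElement *
make_receiver (void)
{
  return gst_parse_launch (
      "unixfdsrc socket-path=/tmp/example-socket ! videoconvert ! "
      "autovideosink", NULL);
}
```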
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5328>
Sending an EOS event is actually really bad because rtpbin doesn't
handle that very well. It was only being used as a way to notify
webrtcbin to check if re-negotiation is needed.
We don't need that anymore, since changing the direction is enough to
notify webrtcbin to check for re-negotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5045>
There's no reason to release the GstMemory manually at all.
If we do release the GstMemory, the corresponding GstBuffer will be
discarded by the GstBufferPool base class because its size has changed
to zero.
The actual cause of the heavy CPU usage, in the case of a fixed-size pool
(i.e., a decoder output buffer pool) when we remove the GstMemory from
the GstBuffer, is that the GstBufferPool base class busy-waits in
acquire_buffer() for some reason. That still needs to be investigated,
but discarding and re-allocating every GstBuffer is not ideal anyway.
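The difference in behaviour, sketched (illustrative, not the removed code):
```
#include <gst/gst.h>

/* Sketch: with a buffer acquired from a fixed-size pool, removing its
 * memory shrinks the buffer to size 0, so on release the GstBufferPool
 * base class discards it and has to allocate a replacement, which is
 * where the busy-waiting in acquire_buffer() then showed up. Simply
 * unreffing the untouched buffer lets the pool reuse it as-is. */
static void
release_back_to_pool (GstBuffer * buffer)
{
  /* Don't do this: */
  /* gst_buffer_remove_all_memory (buffer); */

  /* Just drop the reference and let the pool take the buffer back. */
  gst_buffer_unref (buffer);
}
```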
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4935>