When emitting ICE candidates, also merge them into the local and
pending descriptions so that they show up in the SDP when those are
retrieved from the current-local-description and
pending-local-description properties.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/676
gst_video_frame_copy() will copy the input frame to the staging texture
of the fallback frame. Then we need to map the fallback texture with the
GST_MAP_D3D11 flag to upload the staging texture to the render texture.
Otherwise the render texture wouldn't be updated.
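A minimal sketch of that sequence, assuming the fallback frame's buffer
is backed by GstD3D11Memory (variable names are illustrative):

  /* Copy the input frame into the fallback frame's staging texture. */
  if (!gst_video_frame_copy (&fallback_frame, &in_frame))
    return FALSE;

  /* Mapping with GST_MAP_D3D11 triggers the upload from the staging
   * texture to the render texture; a plain CPU map would leave the
   * render texture stale. */
  GstMapInfo map;
  GstMemory *mem = gst_buffer_peek_memory (fallback_buffer, 0);
  if (!gst_memory_map (mem, &map, GST_MAP_READ | GST_MAP_D3D11))
    return FALSE;
  /* ... render from the now up-to-date texture ... */
  gst_memory_unmap (mem, &map);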
The source texture (decoder view) might be larger than the destination
(staging) texture. In that case, a D3D11_BOX structure should be passed
to the CopySubresourceRegion() method in order to specify the exact
target area.
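For example (a sketch; the texture variables and staging size are
illustrative):

  D3D11_BOX src_box = { 0, };
  src_box.front = 0;
  src_box.back = 1;
  /* Clip the copy to the destination (staging) size so the possibly
   * larger decoder texture doesn't overflow the target. */
  src_box.right = staging_desc.Width;
  src_box.bottom = staging_desc.Height;

  ID3D11DeviceContext_CopySubresourceRegion (context,
      (ID3D11Resource *) staging_texture, 0, 0, 0, 0,
      (ID3D11Resource *) decoder_texture, decoder_subresource_index,
      &src_box);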
- Do not send ABORTs for unexpected packets in response to INIT
- Enable interleaving of messages of different streams
- Configure 1MB send and receive buffer for the socket
- Enable SCTP_SEND_FAILED_EVENT and SCTP_PARTIAL_DELIVERY_EVENT events
- Set SCTP_REUSE_PORT configuration
- Set SCTP_EXPLICIT_EOR and the corresponding send flag. We probably
want to split packets to a maximum size later and only set the flag
on the last packet. Firefox uses 0x4000 as maximum size here.
- Enable SCTP_ENABLE_CHANGE_ASSOC_REQ
- Disable PMTUD and set a maximum initial MTU of 1200 (see the sketch below)
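A rough sketch of this configuration with the usrsctp API (error
handling omitted; the socket variable is illustrative):

  int on = 1;
  int buf_size = 1024 * 1024;
  struct sctp_paddrparams paddr_params = { 0, };

  /* 1MB send and receive buffers */
  usrsctp_setsockopt (sock, SOL_SOCKET, SO_SNDBUF, &buf_size, sizeof (buf_size));
  usrsctp_setsockopt (sock, SOL_SOCKET, SO_RCVBUF, &buf_size, sizeof (buf_size));

  /* Port reuse and explicit EOR mode */
  usrsctp_setsockopt (sock, IPPROTO_SCTP, SCTP_REUSE_PORT, &on, sizeof (on));
  usrsctp_setsockopt (sock, IPPROTO_SCTP, SCTP_EXPLICIT_EOR, &on, sizeof (on));

  /* Disable PMTUD and fix the MTU at 1200 */
  paddr_params.spp_flags = SPP_PMTUD_DISABLE;
  paddr_params.spp_pathmtu = 1200;
  usrsctp_setsockopt (sock, IPPROTO_SCTP, SCTP_PEER_ADDR_PARAMS,
      &paddr_params, sizeof (paddr_params));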
Calling bind() only sets up some data structures and calling connect()
only produces one packet before it returns. That packet is stored in a
queue that is asynchronously forwarded by the encoder's source pad loop,
so not much is happening there either. In particular, no waiting
happens here and no data is forwarded to other elements.
This fixes a race condition during connection setup: the connection
would immediately fail if we pass a packet from the peer to the socket
before bind() and connect() have returned.
This can't happen anymore as bind() and connect() have returned already
before both elements reach the PAUSED state, and in webrtcbin there is
an additional blocking pad probe before the decoder that does not let
any data pass through before that anyway.
The library is thread-safe by itself and potentially calls back into our
code, not only from the same thread but also from other threads. This
can easily lead to deadlocks if we try to hold our mutex on both sides.
DXGI_SWAP_EFFECT_DISCARD cannot be used with the dirty-rect drawing
feature of IDXGISwapChain1::Present1().
Note that the IDXGISwapChain1 interface is available from the Platform
Update for Windows 7 onwards, and the same is true for
DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL.
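A sketch of the two relevant pieces (field values are illustrative):

  DXGI_SWAP_CHAIN_DESC1 desc = { 0, };
  /* FLIP_SEQUENTIAL preserves the backbuffer contents, so only the
   * dirty area needs to be redrawn; DISCARD would invalidate it. */
  desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
  desc.BufferCount = 2;
  /* ... remaining fields ... */

  RECT dirty_rect = { x, y, x + width, y + height };
  DXGI_PRESENT_PARAMETERS params = { 0, };
  params.DirtyRectsCount = 1;
  params.pDirtyRects = &dirty_rect;
  IDXGISwapChain1_Present1 (swap_chain, 0, 0, &params);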
Use the resolution specified in the caps for input_rect instead of the
passed width and height values. The width and height might have been
modified by d3d11videosink, in which case they differ from the actual
frame resolution.
* Move decoding process to handle_frame
* Remove GstVideoDecoder::parse implementation
* Clarify flush/drain/finish usage
In the forward playback case, a have_frame() call will be followed by
handle_frame(), but that is not the case for reverse playback.
To guarantee that a GstVideoCodecFrame is available, the decoding
process should be placed inside handle_frame() instead of parse().
Since we don't support alignment=nal, the parse() implementation is not
worthwhile. In order to fix broken reverse playback, let's remove the
parse() implementation and revisit it when adding alignment=nal support.
... and remove the unused start and stop methods from the subclass.
The current implementation does not require subclass-specific behavior
for the handle_frame() method.
Our buffer pool size and the number of backbuffers are actually
independent. In the case of reverse playback, upstream might request
a lot of buffers (up to the GOP size).
The class data with the caps in it will be leaked if the element is
registered but never instantiated. There is no way around this. Mark
the caps as such so that the leaks tracer does not warn about it.
This is the same as pad template caps getting leaked, which are also
marked as may-be-leaked. These objects are initialized exactly once,
and are 'global' data.
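Concretely, that marking is a one-liner (sketch; klass_caps is
illustrative):

  /* These caps live in class data for the lifetime of the process;
   * tell the leaks tracer not to warn about them. */
  GST_MINI_OBJECT_FLAG_SET (klass_caps, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED);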
Starting from WPEBackend-FDO 1.6.x, software rendering support is available.
This feature allows wpesrc to be used on machines without a GPU, and/or
for testing purposes. To enable it, set the `LIBGL_ALWAYS_SOFTWARE=true` environment
variable and make sure `video/x-raw, format=BGRA` caps are negotiated by the
wpesrc element.
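For example (the URL is illustrative):

  LIBGL_ALWAYS_SOFTWARE=true gst-launch-1.0 wpesrc location="https://gstreamer.freedesktop.org" ! \
      "video/x-raw,format=BGRA" ! videoconvert ! autovideosink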
Otherwise it can happen that e.g. the stream-start event is attempted
to be sent as part of pushing the first buffer. Downstream might not be
in PAUSED/PLAYING yet, so the event is rejected with GST_FLOW_FLUSHING,
and because it is an event it would not cause the blocking pad probe to
trigger first. This would then return GST_FLOW_FLUSHING for the buffer
and shut down all of upstream.
To solve this we return GST_PAD_PROBE_DROP for all events. Sticky
events will be resent later, once we have unblocked after blocking on
the buffer, and then everything works fine.
Don't handle events specially in sink pad blocking pad probes, as there
downstream is not linked yet and we are actually waiting for the
following CAPS event before unblocking can happen.
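A sketch of such a probe (simplified; names are illustrative):

  static GstPadProbeReturn
  block_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
  {
    /* Drop all events: sticky events are resent once we unblock on the
     * first buffer, and we avoid GST_FLOW_FLUSHING for the buffer that
     * would otherwise follow them. */
    if (GST_PAD_PROBE_INFO_TYPE (info) & GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM)
      return GST_PAD_PROBE_DROP;

    /* Keep blocking on buffers until downstream is ready. */
    return GST_PAD_PROBE_OK;
  }

  gst_pad_add_probe (srcpad,
      GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_BUFFER |
      GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, block_probe_cb, NULL, NULL);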
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1172
Without this it might happen that data received from the DTLS transport
is passed to sctpdec before its state has been set to PLAYING. This
would cause the data to be dropped, GST_FLOW_FLUSHING to be returned and
the whole DTLS transport to shut down.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1172
among other things.
Using a GCond can easily lead to deadlocks and only duplicates the
waiting code from gstpad.c in the best case.
In this case it actually could lead to a deadlock if both RTP and RTCP
were waiting. Only one of them would be woken up because g_cond_signal()
was used instead of g_cond_broadcast().
video/x-vp9 is required on the src pad; however, the output includes an
IVF header, which makes the pipeline below fail:
gst-launch-1.0 videotestsrc ! msdkvp9enc ! msdkvp9dec ! fakesink
Since mfx 1.26, the VP9 encoder supports bitstreams without the IVF
header, so in this patch the mfx version is checked and msdkvp9enc is
enabled only if mfx 1.26+ is available.
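The check is along these lines (a sketch; the registration call is
illustrative):

  mfxVersion version;

  if (MFXQueryVersion (session, &version) == MFX_ERR_NONE &&
      (version.Major > 1 || (version.Major == 1 && version.Minor >= 26))) {
    /* The encoder can emit a raw VP9 bitstream without the IVF header,
     * so msdkvp9enc can satisfy its video/x-vp9 src caps. */
    gst_element_register (plugin, "msdkvp9enc", GST_RANK_NONE,
        GST_TYPE_MSDKVP9ENC);
  }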
Android 25 added support for i-frame-interval to be a floating
point value. Store the property as a float and use the newer
version when it's available.
Android 19 added an API for dynamically changing the bitrate in a running
codec.
Also make it so that even when a parameter is not updatable at runtime,
it will at least be stored so that it takes effect the next time the
codec is restarted.
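Conceptually this maps to MediaCodec.setParameters() with the
"video-bitrate" key (API 19+). A sketch with a hypothetical wrapper,
since the actual helper names in the plugin may differ:

  if (self->codec != NULL && android_api_level >= 19) {
    /* Hypothetical JNI wrapper around MediaCodec.setParameters()
     * with PARAMETER_KEY_VIDEO_BITRATE ("video-bitrate"). */
    gst_amc_codec_set_parameter_int (self->codec, "video-bitrate",
        self->bitrate);
  } else {
    /* Not updatable at runtime: remember the value so it is applied
     * the next time the codec is configured. */
    self->pending_bitrate = self->bitrate;
  }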
Change how content-length is set for HTTP POST headers, letting curl set
the header (given the content-length) instead of manually writing it.
This enables curl to know the content-length of the data.
In curl 7.66, if curl does not know the content-length (e.g. when
manually writing the header), curl will use Transfer-Encoding: chunked,
which might not be desired.
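In libcurl terms, the change is roughly (sketch):

  /* Before: write the header ourselves; curl then doesn't know the
   * length and, since 7.66, falls back to Transfer-Encoding: chunked.
   *
   *   headers = curl_slist_append (headers, "Content-Length: 12345");
   */

  /* After: hand the length to curl and let it emit the header. */
  curl_easy_setopt (curl, CURLOPT_POSTFIELDSIZE, (long) content_length);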
Use a double instead of a plain float for intermediary
property values, so we have enough bits to store INT_MAX
and it doesn't get rounded and wrapped to -1 when cast
back to a 32-bit integer.
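A quick standalone illustration of the rounding problem:

  #include <limits.h>
  #include <stdio.h>

  int main (void)
  {
    /* INT_MAX (2147483647) is not representable in a 32-bit float; it
     * rounds up to 2147483648.0, which is out of range for a 32-bit
     * integer, so the conversion back wraps. A double is exact. */
    printf ("float:  %.1f\n", (double) (float) INT_MAX);  /* 2147483648.0 */
    printf ("double: %.1f\n", (double) INT_MAX);          /* 2147483647.0 */
    return 0;
  }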
Fixes criticals like
g_param_spec_int: assertion 'default_value >= minimum && default_value <= maximum' failed
when loading LADSPA plugins from the Linux Studio Plugins
Project (http://lsp-plug.in) in GStreamer.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1194
The avtpsink element is expected to transmit AVTPDUs at specific times,
according to GstBuffer timestamps. Currently, the transmission time is
controlled in software via the rendering synchronization mechanism
provided by GstBaseSink class. However, that mechanism may not cope with
some AVB use-cases such as Class A streams, where AVTPDUs are expected
to be transmitted every 125 us. Thus, this patch introduces avtpsink's
own mechanism, which leverages the socket transmission scheduling
infrastructure introduced in Linux kernel 4.19. When supported by the
NIC, the transmission scheduling is offloaded to the hardware, improving
transmission time accuracy considerably.
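The infrastructure in question is the SO_TXTIME socket option plus the
SCM_TXTIME control message. In rough strokes (sketch, error handling
omitted):

  struct sock_txtime txtime_cfg = {
    .clockid = CLOCK_TAI,
    .flags = 0,
  };
  setsockopt (fd, SOL_SOCKET, SO_TXTIME, &txtime_cfg, sizeof (txtime_cfg));

  /* Each AVTPDU then carries its transmission time in a cmsg; the etf
   * qdisc, or the NIC when offload is enabled, releases the packet at
   * exactly that time. */
  char control[CMSG_SPACE (sizeof (uint64_t))] = { 0, };
  struct msghdr msg = { 0, };
  /* ... fill msg.msg_iov with the AVTPDU, msg.msg_name with the addr ... */
  msg.msg_control = control;
  msg.msg_controllen = sizeof (control);

  struct cmsghdr *cmsg = CMSG_FIRSTHDR (&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_TXTIME;
  cmsg->cmsg_len = CMSG_LEN (sizeof (uint64_t));
  memcpy (CMSG_DATA (cmsg), &tx_time_ns, sizeof (uint64_t));

  sendmsg (fd, &msg, 0);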
To illustrate that, a before-after experiment was carried out. The
experimental setup consisted of 2 PCs with Intel i210 cards connected
back-to-back running an up-to-date Archlinux with kernel 5.3.1. In one
host gst-launch-1.0 was used to generate a 2-minute Class A stream while
the other host captured the packets. The metric under evaluation is the
transmission interval and it is measured by checking the 'time_delta'
information from ethernet frames captured at the receiving side.
The table below shows the outcome for a 48 kHz, 16-bit sample, stereo
audio stream. The unit is nanoseconds.
       |   Mean | Stdev |    Min |    Max |  Range |
-------+--------+-------+--------+--------+--------+
Before | 125000 |  2401 | 110056 | 288432 | 178376 |
After  | 125000 |    18 | 124943 | 125055 |    112 |
Before this patch, the transmission interval mean is equal to the
optimal value (Class A stream -> 125 us interval), and it is kept the
same after the patch. The dispersion measurements, however, had
improved considerably, meaning the system is now consistently
transmitting AVTPDUs at the correct time.
Finally, the socket transmission scheduling infrastructure requires the
system clock to be synchronized with the PTP clock, so this patch
modifies the AVTP plugin documentation to cover how to achieve that.
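For example, with the linuxptp tools (interface name and config file
are illustrative):

  ptp4l -i eth0 -f gPTP.cfg &
  phc2sys -s eth0 -w &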
This patch refactors gst_avtp_sink_start() by moving all socket
initialization code into its own function. This change prepares the
code for the next patch, which will introduce avtpsink's own rendering
synchronization mechanism.
Current avtpsink code opens the AF_PACKET socket with SOCK_NONBLOCK
option. However, we actually want sendto() to block in case there isn't
space available in the socket buffer.
This patch refactors both avtpsink and avtpsrc code so we use the
if_nametoindex() helper instead of building a request and issuing an
ioctl to get the if_index.
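That is, instead of the ioctl dance, simply (sketch):

  #include <net/if.h>

  /* Before:
   *   struct ifreq req = { 0, };
   *   strncpy (req.ifr_name, ifname, sizeof (req.ifr_name) - 1);
   *   ioctl (fd, SIOCGIFINDEX, &req);
   *   index = req.ifr_ifindex;
   */
  unsigned int index = if_nametoindex (ifname);
  if (index == 0) {
    /* errno is set; the interface doesn't exist. */
  }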
d3d11window holds one buffer for redrawing the client area per resize
event. When the input format changes, this buffer should be cleared to
avoid a mismatch between the newly configured shader/videoprocessor and
the format of the previously cached buffer.
Because the size of a texture array cannot be updated dynamically, the
allocator should block the allocation request. This cannot be done on
the buffer pool side if the d3d11 memory is shared among multiple
buffer objects. Note that setting the NO_SHARE flag on d3d11 memory is
very inefficient, as it would most likely cause a copy of the d3d11
texture.
...for color space conversion if available
ID3D11VideoProcessor is equivalent to the DXVA-HD video processor,
which might use specialized hardware blocks for video processing
instead of general GPU resources. In addition to that feature, we need
to use this API for color space conversion of DXVA2 decoder output
memory, because d3d11 texture arrays that were created with
D3D11_BIND_DECODER cannot be used as shader resources.
This is prework for d3d11decoder zero-copy rendering and also
for conditional HDR tone-map support.
Note that some Intel platforms are known to support tone-mapping at
the driver level using this API on Windows 10.
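Creating the processor follows the usual DXVA-HD pattern (a sketch in
the C COM style; sizes and frame rates are illustrative):

  D3D11_VIDEO_PROCESSOR_CONTENT_DESC desc = { 0, };
  ID3D11VideoProcessorEnumerator *enumerator = NULL;
  ID3D11VideoProcessor *processor = NULL;

  desc.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
  desc.InputWidth = width;
  desc.InputHeight = height;
  desc.OutputWidth = width;
  desc.OutputHeight = height;
  desc.Usage = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL;

  ID3D11VideoDevice_CreateVideoProcessorEnumerator (video_device,
      &desc, &enumerator);
  ID3D11VideoDevice_CreateVideoProcessor (video_device, enumerator, 0,
      &processor);
  /* The conversion itself is then done with
   * ID3D11VideoContext::VideoProcessorBlt() using input/output views
   * created from the decoder and render textures. */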