We don't support stopping RTP receivers currently, so let's not
consider them all stopped all the time. This fixes some of the
ICE/DTLS state change handling, and specifically fixes the ICE
gathering state. Previously the ICE gathering state went immediately
from NEW to COMPLETE because all transceivers were considered stopped,
and as such all active transceivers had trivially finished gathering
ICE candidates.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1126
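As a minimal sketch of the aggregation bug, with hypothetical types
standing in for webrtcbin's internals (simplified: the real code also
distinguishes NEW): when every transceiver is skipped as stopped, the
"all complete" check below is vacuously true.
```
#include <glib.h>

typedef enum
{
  GATHERING_NEW,
  GATHERING_GATHERING,
  GATHERING_COMPLETE
} GatheringState;

typedef struct
{
  gboolean stopped;             /* now always FALSE: stopping is unsupported */
  GatheringState state;
} Transceiver;

static GatheringState
aggregate_gathering_state (GPtrArray * transceivers)
{
  gboolean all_complete = TRUE;
  guint i;

  for (i = 0; i < transceivers->len; i++) {
    Transceiver *t = g_ptr_array_index (transceivers, i);

    /* Previously every transceiver was treated as stopped, so the loop
     * skipped them all and the vacuous "all complete" below fired,
     * jumping NEW straight to COMPLETE. */
    if (t->stopped)
      continue;

    if (t->state != GATHERING_COMPLETE)
      all_complete = FALSE;
  }

  return all_complete ? GATHERING_COMPLETE : GATHERING_GATHERING;
}
```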
Before this change the decoder used the oldest frame in the list to
pair it with the decoded surface. This only works when there's a
perfect stream like HEADERS,SYNCPOINT,DELTA...
When playing RTSP streams we can get imperfect streams like HEADERS,
DELTA,SYNCPOINT,DELTA... In this case the decoder drops the frames
between HEADERS and SYNCPOINT, which leads to using the wrong PTS on
the output frames.
With this change we inject the input PTS into the bitstream and use it
to align the internal frame list with the actual decode position.
Fixes playback with:
```
gst-launch-1.0 rtspsrc location=... latency=0 drop-on-latency=1 ! ...
```
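As a rough sketch of the new pairing, with a hypothetical PendingFrame
type and assuming decode-order timestamps: instead of always popping
the oldest entry, pop entries until the injected PTS matches the
decoded surface, discarding the ones the decoder dropped.
```
#include <glib.h>

typedef struct
{
  guint64 pts;
  /* ... decoder-specific frame data ... */
} PendingFrame;

static PendingFrame *
take_frame_by_pts (GList ** pending, guint64 decoded_pts)
{
  while (*pending != NULL) {
    PendingFrame *f = (*pending)->data;

    *pending = g_list_delete_link (*pending, *pending);

    if (f->pts == decoded_pts)
      return f;                 /* the frame that was actually decoded */

    g_free (f);                 /* dropped by the decoder, discard it */
  }

  return NULL;
}
```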
The hard-coded 16x16 minimum resolution is likely to differ from what
the device actually supports in most cases. If NV_ENC_CAPS_WIDTH_MIN
and NV_ENC_CAPS_HEIGHT_MIN are available, update the pad template with
the returned values.
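A sketch of the caps query, assuming `api` is a populated
NV_ENCODE_API_FUNCTION_LIST and `enc` an opened encoder session; the
old 16x16 values remain the fallback when the query fails.
```
#include "nvEncodeAPI.h"

static void
query_min_resolution (NV_ENCODE_API_FUNCTION_LIST * api, void *enc,
    GUID codec_guid, int *min_w, int *min_h)
{
  NV_ENC_CAPS_PARAM param = { 0, };
  int val = 0;

  /* fall back to the previously hard-coded minimum */
  *min_w = 16;
  *min_h = 16;

  param.version = NV_ENC_CAPS_PARAM_VER;

  param.capsToQuery = NV_ENC_CAPS_WIDTH_MIN;
  if (api->nvEncGetEncodeCaps (enc, codec_guid, &param, &val) == NV_ENC_SUCCESS)
    *min_w = val;

  param.capsToQuery = NV_ENC_CAPS_HEIGHT_MIN;
  if (api->nvEncGetEncodeCaps (enc, codec_guid, &param, &val) == NV_ENC_SUCCESS)
    *min_h = val;
}
```
The returned minimums can then be written into the width/height ranges
of the pad template caps.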
need_reconfig is added to allow a subclass to require a reconfig when
the input frame or the metadata (e.g. GstVideoRegionOfInterestMeta)
attached to the input frame changes.
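For illustration, a minimal sketch of a subclass check; the `had_roi`
state tracking around it is hypothetical.
```
#include <gst/video/video.h>

static void
check_roi_meta (GstBuffer * inbuf, gboolean had_roi, gboolean * need_reconfig)
{
  gboolean has_roi = gst_buffer_get_meta (inbuf,
      GST_VIDEO_REGION_OF_INTEREST_META_API_TYPE) != NULL;

  /* ROI meta appeared or disappeared: the encoder needs a reconfig */
  if (has_roi != had_roi)
    *need_reconfig = TRUE;
}
```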
`pipe()` isn't used since 15927b6511,
and `socketpair()` from `#include <sys/socket.h>` is used only in the
examples. In practice, you can probably also use anything else that
allows you to create fd pairs, such as named pipes or anonymous pipes.
We use the cross-platform GstPollFD API in the plugin.
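For illustration, a minimal sketch (names hypothetical) of creating an
fd pair with `socketpair()` and watching the read end through the
GstPollFD API:
```
#include <sys/socket.h>
#include <gst/gst.h>

static gboolean
setup_control_fds (int fds[2], GstPoll ** set, GstPollFD * read_fd)
{
  if (socketpair (AF_UNIX, SOCK_STREAM, 0, fds) != 0)
    return FALSE;

  *set = gst_poll_new (TRUE);

  gst_poll_fd_init (read_fd);
  read_fd->fd = fds[0];

  gst_poll_add_fd (*set, read_fd);
  gst_poll_fd_ctl_read (*set, read_fd, TRUE);

  return TRUE;
}

/* Later, gst_poll_wait (set, GST_CLOCK_TIME_NONE) together with
 * gst_poll_fd_can_read (set, read_fd) consumes wakeups on fds[0],
 * while the other side writes to fds[1]. */
```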
Use a consistent memory layout between DXVA and other shader use
cases. For example, use a DXGI_FORMAT_NV12 texture instead of two
textures with DXGI_FORMAT_R8_UNORM and DXGI_FORMAT_R8G8_UNORM.
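A sketch of the single-texture layout in C COM style: plane 0 of an
NV12 texture is viewed as R8 and plane 1 as R8G8, so the same memory
works for both DXVA and shaders. Bind flags and error handling are
simplified here.
```
#define COBJMACROS
#include <d3d11.h>

static HRESULT
create_nv12_views (ID3D11Device * device, UINT width, UINT height,
    ID3D11Texture2D ** tex, ID3D11ShaderResourceView ** y_srv,
    ID3D11ShaderResourceView ** uv_srv)
{
  D3D11_TEXTURE2D_DESC desc = { 0, };
  D3D11_SHADER_RESOURCE_VIEW_DESC srv = { 0, };
  HRESULT hr;

  desc.Width = width;
  desc.Height = height;
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_NV12;
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_DEFAULT;
  desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

  hr = ID3D11Device_CreateTexture2D (device, &desc, NULL, tex);
  if (FAILED (hr))
    return hr;

  srv.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
  srv.Texture2D.MipLevels = 1;

  /* the luma plane is addressed through an R8 view... */
  srv.Format = DXGI_FORMAT_R8_UNORM;
  hr = ID3D11Device_CreateShaderResourceView (device,
      (ID3D11Resource *) *tex, &srv, y_srv);
  if (FAILED (hr))
    return hr;

  /* ...and the chroma plane through an R8G8 view of the same texture */
  srv.Format = DXGI_FORMAT_R8G8_UNORM;
  return ID3D11Device_CreateShaderResourceView (device,
      (ID3D11Resource *) *tex, &srv, uv_srv);
}
```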
This reverts commit ddd13fc7c0
Dynamic usage can reduce the number of copies per frame, but it makes
things complicated and the benefit does not seem significant. Also,
since we don't provide a _map() method for the dynamic usage,
applications cannot read the buffers, which makes the "last-sample"
property unusable in the case of d3d11videosink.
xdg_shell fullscreen mode doesn't work when committing an xdg_surface
without a configure acknowledgement. In addition, in this mode we
can't apply surface settings that differ from the acknowledged
configuration.
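A minimal sketch of the required handshake with the
wayland-scanner-generated xdg-shell bindings: acknowledge the
configure event first and only then commit, so the committed state
matches the acknowledged configuration.
```
#include <stdint.h>
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

struct surface_state
{
  struct wl_surface *wl_surface;
  int configured;
};

static void
handle_xdg_surface_configure (void *data, struct xdg_surface *xdg_surface,
    uint32_t serial)
{
  struct surface_state *state = data;

  /* ack the new (e.g. fullscreen) configuration before committing */
  xdg_surface_ack_configure (xdg_surface, serial);
  state->configured = 1;

  /* committing before this point would apply surface settings that
   * differ from the acknowledged configuration */
  wl_surface_commit (state->wl_surface);
}

static const struct xdg_surface_listener surface_listener = {
  handle_xdg_surface_configure,
};
```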
Some raw h264 encoded files trigger the assignment of wrong PTS to
buffers when some SEI data is provided. This change prevents that from
happening. Also ensure this behavior is covered by a test.
We might have some old timecodes that are now in the future, and have
to drop those to make sure that our queue is correctly ordered and we
don't have multiple timecodes for the same running time.
Directly read them out of the decoder as soon as we have passed audio
to it, and then store them in a queue that we handle internally
together with their timestamps. This cleans up memory management and
gives us proper control over the queue, instead of guessing how the
queue inside the LTC decoder actually works and when it overflows.
Also introduce 6 instead of 2 frames of latency compared to the LTC
audio input, as that seems to be an upper bound for how much the LTC
library lags behind.
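A rough sketch of the new handling with libltc and a GQueue, folding
in the dropping of now-future timecodes; `sample_to_running_time()` is
a hypothetical conversion helper.
```
#include <glib.h>
#include <ltc.h>

typedef struct
{
  SMPTETimecode tc;
  guint64 running_time;
} TimeCodeEntry;

static void
drain_ltc_decoder (LTCDecoder * decoder, GQueue * queue,
    guint64 (*sample_to_running_time) (ltc_off_t sample))
{
  LTCFrameExt frame;

  /* read every decoded frame out right after feeding audio in, instead
   * of letting the decoder's internal queue overflow silently */
  while (ltc_decoder_read (decoder, &frame) > 0) {
    TimeCodeEntry *e = g_new0 (TimeCodeEntry, 1);

    ltc_frame_to_time (&e->tc, &frame.ltc, 0);
    e->running_time = sample_to_running_time (frame.off_end);

    /* drop queued timecodes that are not older than the new one so the
     * queue stays ordered with one timecode per running time */
    while (!g_queue_is_empty (queue)) {
      TimeCodeEntry *tail = g_queue_peek_tail (queue);

      if (tail->running_time < e->running_time)
        break;
      g_free (g_queue_pop_tail (queue));
    }

    g_queue_push_tail (queue, e);
  }
}
```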
AES128 support was only added in nettle version 3.0; with older
versions the build fails:
```
../subprojects/gst-plugins-bad/ext/hls/gsthlsdemux.h:110:10: error: field ‘ctx’ has incomplete type
 struct CBC_CTX (struct aes128_ctx, AES_BLOCK_SIZE) aes_ctx;
```
Commit a1584b6 caused a big performance drop if the downstream element
is not an msdk element, because reading data directly from video
memory is very slow.
This reverts commit a1584b6f99.
If 8 bit VANC is required by the device/mode then it will be converted
internally by the SDK, but the SDK won't automatically convert from 8
to 10 bit. As such, always use 10 bit VANC.
Some devices also require configuring a 10 bit video format when
10 bit VANC is in use, but those would fail regardless and the
application would have to configure the correct video format anyway.
With newer versions of the SDK this information can be retrieved via
the BMDDeckLinkVANCRequires10BitYUVVideoFrames flag, but we don't use
a new enough SDK version yet to extract this information.
As an H265/H264 bitstream can contain multiple slices,
mastering_display_info_state and content_light_level_state
should be changed only on the first slice segment.
Fixes #1152
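A minimal sketch of the condition using the codecparsers slice header;
the state-update functions are hypothetical stubs for the real
handling.
```
#include <gst/codecparsers/gsth265parser.h>

/* hypothetical stand-ins for the real state updates */
static void update_mastering_display_info_state (void) { }
static void update_content_light_level_state (void) { }

static void
update_hdr_states (const GstH265SliceHdr * slice_hdr)
{
  /* later slices of the same picture must not change the states again */
  if (!slice_hdr->first_slice_segment_in_pic_flag)
    return;

  update_mastering_display_info_state ();
  update_content_light_level_state ();
}
```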