Because the size of a texture array cannot be updated dynamically, the
allocator should block the allocation request. This cannot be done on
the buffer pool side if the d3d11 memory is shared among multiple
buffer objects. Note that setting the NO_SHARE flag on d3d11 memory is
very inefficient, as it would most likely cause a copy of the d3d11
texture.
...for color space conversion if available
ID3D11VideoProcessor is equivalent to the DXVA-HD video processor,
which might use specialized hardware blocks for video processing
instead of general GPU resources. Besides that, we need to use this
API for color space conversion of DXVA2 decoder output memory, because
d3d11 texture arrays created with D3D11_BIND_DECODER cannot be used as
shader resources.
This is prework for d3d11 decoder zero-copy rendering and also for
conditional HDR tone-mapping support.
Note that some Intel platforms are known to support tone-mapping at
the driver level using this API on Windows 10.
We've been using the NvEncodeAPICreateInstance method to find the
supported API version, but that seems to be insufficient: there are
cases where the plugin failed to open an encoding session even though
NvEncodeAPICreateInstance succeeded. Asking the driver about the
version is the most reliable approach.
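A rough sketch of that check, following NVIDIA's documented usage of
NvEncodeAPIGetMaxSupportedVersion(), which packs the version as
(major << 4) | minor; the helper name is illustrative:

    #include <stdint.h>
    #include <glib.h>
    #include "nvEncodeAPI.h"

    /* Ask the driver for the highest NVENC API version it supports and
     * compare it against the headers we were built with. */
    static gboolean
    nvenc_driver_version_is_ok (void)
    {
      uint32_t max_version = 0;
      uint32_t our_version =
          (NVENCAPI_MAJOR_VERSION << 4) | NVENCAPI_MINOR_VERSION;

      if (NvEncodeAPIGetMaxSupportedVersion (&max_version) != NV_ENC_SUCCESS)
        return FALSE;

      /* If our headers are newer than what the driver supports, opening
       * an encode session would fail later anyway, so fail up front. */
      return our_version <= max_version;
    }

In the plugin the symbol would be resolved dynamically from the NVENC
library rather than linked directly.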
The receive bin should block buffers from reaching dtlsdec before
the dtls connection has started.
While there was code to block its sinkpads until receive_state was
different from BLOCK, nothing ever set it to BLOCK in the first place.
This commit fixes that by setting the initial state to BLOCK directly
in the constructor.
In addition, now that blocking is effective, we only want to block
buffers and buffer lists, as those are what might trigger errors;
events and queries must still go through, because holding them back
causes immediate deadlocks when linking the bin.
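One idiomatic way to get that behaviour is a blocking pad probe that
only matches buffers and buffer lists (a sketch; the bin may implement
the same logic in its chain functions instead):

    #include <gst/gst.h>

    static GstPadProbeReturn
    sink_block_probe (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      /* Returning GST_PAD_PROBE_OK from a blocking probe keeps the data
       * blocked until the probe is removed with gst_pad_remove_probe(). */
      return GST_PAD_PROBE_OK;
    }

    static gulong
    block_sinkpad (GstPad * sinkpad)
    {
      /* BLOCK is combined with BUFFER and BUFFER_LIST only, so events
       * and queries are never held back. */
      return gst_pad_add_probe (sinkpad,
          GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_BUFFER |
          GST_PAD_PROBE_TYPE_BUFFER_LIST, sink_block_probe, NULL, NULL);
    }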
Also free the data with the correct free() function in the receive
callback, by passing it to gst_buffer_new_wrapped_full() instead of
gst_buffer_new_wrapped().
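For reference, the difference is only in who frees the memory:
gst_buffer_new_wrapped() assumes g_malloc()ed data and frees it with
g_free(), while the _full() variant takes an explicit GDestroyNotify.
A sketch, with the callback's allocator-specific free function passed
in:

    #include <gst/gst.h>

    /* 'data' was not allocated with g_malloc(), so hand the matching
     * free function to gst_buffer_new_wrapped_full(). */
    static GstBuffer *
    wrap_received_data (gpointer data, gsize size, GDestroyNotify free_func)
    {
      return gst_buffer_new_wrapped_full (0, data, size, 0, size,
          data, free_func);
    }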
The alignment property works like in mpegtsmux, joining several MpegTS
packets into one output buffer. The default value of 0 joins as many
packets as possible for each incoming buffer, to optimise CPU usage.
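As a usage example (the element variable and the value of 7 packets,
7 * 188 = 1316 bytes, are illustrative):

    /* Join up to 7 MpegTS packets per output buffer; 0 (the default)
     * joins as many as possible for each incoming buffer. */
    g_object_set (mux, "alignment", 7, NULL);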
When a pipeline is stopped (actually when the waylandsink element
state changes from PAUSED to READY) the video surface is cleared, but
the opaque black surface behind is not. Fix this by actually clearing
both surfaces.
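Clearing a wl_surface is done by attaching a NULL buffer and
committing; a minimal sketch of doing that for both surfaces (helper
name illustrative):

    #include <wayland-client.h>

    /* Attaching a NULL wl_buffer removes the surface content on the
     * next commit. Apply it to the video surface and to the opaque
     * black background surface behind it. */
    static void
    clear_surface (struct wl_surface *surface)
    {
      wl_surface_attach (surface, NULL, 0, 0);
      wl_surface_commit (surface);
    }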
Users see a corrupted display when running `videotestsrc !
video/x-raw,format=NV12,width=xxx,height=xxx ! msdkh265enc ! msdkh265dec
! glimagesink` with a changed frame size, e.g. from 1920x1080 to 1920x240.
The root cause is that the same dmabuf fd is used for frames of
different sizes, which causes unexpected results. This patch requires
that, for DMABuf only, a cached response is reused solely for frames of
the same size, so a dmabuf fd can no longer be used for frames of a
different size.
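A hypothetical sketch of the size check (the CachedResponse type and
its fields are made up for illustration):

    /* Only reuse a cached response for DMABuf when the requested frame
     * size matches, so one dmabuf fd is never shared between frames of
     * different sizes. Non-DMABuf paths are unaffected. */
    static gboolean
    can_reuse_cached_response (CachedResponse * cached,
        guint width, guint height, gboolean is_dmabuf)
    {
      if (!is_dmabuf)
        return TRUE;

      return cached->width == width && cached->height == height;
    }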
Don't specify the resolution of the backbuffer; DXGI will then use the
actual client area. When the upstream resolution changes, updating the
size of the backbuffer without taking the client size into account
would cause a mismatch between the two.
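DXGI documents that passing zero width and height to
IDXGISwapChain::ResizeBuffers makes it use the current client area of
the target window, e.g.:

    #define COBJMACROS
    #include <dxgi.h>

    /* 0/0 width and height: DXGI sizes the backbuffer to the window's
     * client area; 0 buffers and DXGI_FORMAT_UNKNOWN keep the existing
     * buffer count and format. */
    static HRESULT
    resize_backbuffer (IDXGISwapChain * swap_chain)
    {
      return IDXGISwapChain_ResizeBuffers (swap_chain, 0, 0, 0,
          DXGI_FORMAT_UNKNOWN, 0);
    }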
Setting CUVID_PKT_DISCONTINUITY implies clearing any past information
about the stream in the decoder. The GStreamer discont flag is used for
discontinuities caused by a seek, for the first buffer, and when a
buffer was dropped. In the first two cases, the parsers and demuxers
should ensure we start from a synchronization point, so it's unlikely
that a delta frame will be matched against the wrong state.
For packet loss, the discontinuity flag would prevent the decoder from
doing any concealment, with a result that can be much worse visually,
or freeze the playback until an IDR is met. It's better to let the
decoder handle that for us.
Removing this flag also works around a bug in the NVidia parser that
makes it ignore our ENDOFPICTURE flag and increases the latency by one
frame.
This sets the CUVID_PKT_ENDOFPICTURE flag in order to inform the
decoder that we have a complete picture. This should remove the one
frame of latency otherwise introduced by the NVidia parser.
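A minimal sketch of feeding one complete picture to the parser with
that flag set (the helper name is illustrative):

    #include "nvcuvid.h"

    static CUresult
    feed_picture (CUvideoparser parser, const unsigned char *data,
        unsigned long size, CUvideotimestamp ts)
    {
      CUVIDSOURCEDATAPACKET packet = { 0, };

      /* One input buffer == one complete picture: ENDOFPICTURE lets
       * the parser emit the frame immediately instead of waiting for
       * the start of the next one. */
      packet.payload = data;
      packet.payload_size = size;
      packet.flags = CUVID_PKT_TIMESTAMP | CUVID_PKT_ENDOFPICTURE;
      packet.timestamp = ts;

      return cuvidParseVideoData (parser, &packet);
    }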
If we have no DTS but a PTS, then both are the same and we should
update last_ts with the PTS. Only if both are unknown do we not know
the current position, and only then should we not update it at all.
Previously we would always set last_ts to GST_CLOCK_TIME_NONE if the
DTS was unknown, which caused the position to jump around and spurious
gap events to be sent.
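The resulting update logic, sketched (stream->last_ts stands in for
the element's actual position field):

    /* Prefer the DTS; fall back to the PTS (they are equal when only
     * the PTS is known); leave last_ts untouched when neither is set
     * instead of resetting it to GST_CLOCK_TIME_NONE. */
    if (GST_BUFFER_DTS_IS_VALID (buffer))
      stream->last_ts = GST_BUFFER_DTS (buffer);
    else if (GST_BUFFER_PTS_IS_VALID (buffer))
      stream->last_ts = GST_BUFFER_PTS (buffer);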
This patch fixes the compiler warning below:
[1/4] Compiling C object 'sys/msdk/dc44ea0@@gstmsdk@sha/gstmsdkvpp.c.o'.
../../gst-plugins-bad/sys/msdk/gstmsdkvpp.c: In function
‘gst_msdkvpp_context_prepare’:
../../gst-plugins-bad/sys/msdk/gstmsdkvpp.c:214:7: warning: suggest
parentheses around operand of ‘!’ or change ‘&’ to ‘&&’ or ‘!’ to ‘~’
[-Wparentheses]
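The pattern behind the warning, with illustrative names: '!' binds
tighter than '&', so the unparenthesized form negates first and then
masks, which is not the intended bit test.

    if (!job_type & GST_MSDK_JOB_VPP)     /* parsed as (!job_type) & ... */
      return FALSE;

    if (!(job_type & GST_MSDK_JOB_VPP))   /* the intended bit test */
      return FALSE;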
We need the streams' pt maps updated before requesting pads
on rtpbin, because this is what will trigger the requesting
of FEC encoders, and our handler for this request looks for
the payload types in the relevant stream's pt map.
Fixes #1187
Instead of doing it on each packet, do it based on the distance to the
previous SCR rather than based on the DTS.
Previously we would send gap events for audio all the time if the SCR
distance was 400ms: the threshold for audio is 300ms, and by only ever
updating the position when the SCR updated we would always be 100ms
above the threshold and send needless gap events.
This fixes audio glitches caused by gap events in various files.
Our context is non-persistent, and we propagate it throughout the
pipeline. This means that if we try to reuse any gstmsdk element by
removing it from the pipeline and then re-adding it, we'll clone the
mfxSession and create a new gstmsdk context as a child of the old one
inside `gst_msdk_context_new_with_parent()`.
Normally this only allocates a few KB inside the driver, but on
Windows it seems to allocate tens of MBs which leads to linearly
increasing memory usage for each PLAYING->NULL->PLAYING state cycle
for the process. The contexts will only be freed when the pipeline
itself goes to `NULL`, which would defeat the purpose of dynamic
pipelines.
Essentially, we need to optimize the case in which the element is
removed from the pipeline and re-added and the same context is re-set
on it. To detect that case, we set the context on `old_context`, and
compare it to the new one when preparing the context. If they're the
same, we don't need to do anything.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/946
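A hypothetical sketch of that fast path (the thiz/old_context naming
only illustrates the idea):

    /* The element was removed from the pipeline and re-added, and the
     * same context was re-set on it: reuse it as-is instead of cloning
     * the mfxSession into a new child context. */
    if (msdk_context == thiz->old_context) {
      thiz->context = msdk_context;
      return TRUE;
    }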