need_reconfig is added to allow a subclass to request a reconfig when
the input frame or the metadata (e.g. GstVideoRegionOfInterestMeta)
attached to the input frame changes.
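A minimal sketch of the kind of check a subclass could do, using the standard
GstVideoRegionOfInterestMeta API; the helper name and the surrounding flag
handling are illustrative, not the actual base-class code:

#include <gst/video/video.h>

/* Illustrative helper: a subclass could request a reconfig when the
 * ROI meta appears on or disappears from the input buffer. */
static gboolean
input_roi_meta_changed (GstBuffer * inbuf, gboolean had_roi)
{
  gboolean has_roi =
      gst_buffer_get_meta (inbuf,
          GST_VIDEO_REGION_OF_INTEREST_META_API_TYPE) != NULL;

  return has_roi != had_roi;
}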
`pipe()` isn't used since 15927b6511,
and `socketpair()` from `#include <sys/socket.h>` is used only in the
examples. In practice, you can probably also use anything that
allows you to create fd pairs, such as named pipes or anonymous pipes.
We use the cross-platform GstPollFD API in the plugin.
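For illustration, a minimal fd pair created with socketpair(), as the examples
do; pipe() or any other mechanism that yields two fds would work just as well:

#include <sys/socket.h>
#include <stdio.h>

int
main (void)
{
  int fds[2];

  /* Create a connected pair of fds; one end goes to the element,
   * the other stays with the application. */
  if (socketpair (AF_UNIX, SOCK_STREAM, 0, fds) != 0) {
    perror ("socketpair");
    return 1;
  }

  /* fds[0] and fds[1] can now be handed to the respective sides */
  return 0;
}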
Use a consistent memory layout between DXVA and other shader use cases.
For example, use the DXGI_FORMAT_NV12 texture format instead of
two textures with DXGI_FORMAT_R8_UNORM and DXGI_FORMAT_R8G8_UNORM.
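As a sketch, the single-texture NV12 layout looks roughly like this (the bind
flags and the helper itself are illustrative and depend on the use case):

#include <d3d11.h>

/* Describe one NV12 texture instead of separate R8/R8G8 textures */
static D3D11_TEXTURE2D_DESC
make_nv12_desc (UINT width, UINT height)
{
  D3D11_TEXTURE2D_DESC desc = { 0 };

  desc.Width = width;
  desc.Height = height;
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_NV12;   /* Y and UV planes share one texture */
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_DEFAULT;
  desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_DECODER;

  return desc;
}

Shaders then create per-plane views on that same texture (DXGI_FORMAT_R8_UNORM
for Y, DXGI_FORMAT_R8G8_UNORM for UV) instead of sampling two separate textures.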
This reverts commit ddd13fc7c0
Dynamic usage can reduce the number of copies per frame, but it makes
things complicated and the benefit does not seem significant.
Also, since we don't provide a _map() method for the dynamic usage,
applications cannot read the buffers, which makes the "last-sample"
property unusable in the case of d3d11videosink.
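For reference, the "last-sample" read-back that stops working without a _map()
implementation looks like this sketch (error handling trimmed; sink is assumed
to be the d3d11videosink element):

#include <gst/gst.h>

static void
read_last_sample (GstElement * sink)
{
  GstSample *sample = NULL;
  GstBuffer *buffer;
  GstMapInfo map;

  /* GstBaseSink exposes the last rendered buffer here */
  g_object_get (sink, "last-sample", &sample, NULL);
  if (sample == NULL)
    return;

  buffer = gst_sample_get_buffer (sample);
  /* Only possible if the underlying memory implements map() */
  if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* ... read back the rendered frame, e.g. for a snapshot ... */
    gst_buffer_unmap (buffer, &map);
  }
  gst_sample_unref (sample);
}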
xdg_shell fullscreen mode doesn't work when committing an
xdg_surface without acknowledging its configure event.
In addition, in this mode we can't apply surface settings that
differ from the acknowledged configuration.
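A sketch of the required sequence with the generated xdg-shell client API:
acknowledge the configure event first, and only then commit the surface for
the new state (passing the wl_surface as user data is an assumption here).

#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"   /* generated from the protocol XML */

static void
xdg_surface_handle_configure (void *data, struct xdg_surface *xdg_surface,
    uint32_t serial)
{
  struct wl_surface *wl_surface = data;

  /* Acknowledge the configuration we are about to apply ... */
  xdg_surface_ack_configure (xdg_surface, serial);
  /* ... and only afterwards commit the surface in its new
   * (e.g. fullscreen) state */
  wl_surface_commit (wl_surface);
}

static const struct xdg_surface_listener xdg_surface_listener = {
  xdg_surface_handle_configure,
};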
Some raw H.264 encoded files trigger the assignment of a wrong PTS to buffers
when SEI data is present. This change prevents that from happening.
Also ensure this behavior is covered by tests.
We might have some old timecodes that are in the future now and have to
drop those to make sure that our queue is correctly ordered and we don't
have multiple timecodes for the same running time.
Directly read them out of the decoder as soon as we have passed audio to
it, and then store them in a queue that we handle internally together with their
timestamps. This cleans up memory management and gives us proper control
over the queue instead of guessing how the queue inside the LTC decoder
actually works and when it overflows.
Also introduce 6 instead of 2 frames of latency compared to the LTC
audio input, as that seems to be an upper bound for how much the LTC
library lags behind.
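A sketch of that read-out, assuming libltc's ltc_decoder_read() and a GQueue
of (timecode, timestamp) entries; the item struct and helper are illustrative:

#include <ltc.h>
#include <gst/gst.h>

typedef struct
{
  LTCFrameExt ltc_frame;
  GstClockTime running_time;   /* timestamp we attach ourselves */
} TimeCodeItem;

static void
drain_ltc_decoder (LTCDecoder * decoder, GQueue * queue,
    GstClockTime running_time)
{
  LTCFrameExt frame;

  /* Pull every decoded timecode out of libltc immediately and keep it
   * in our own queue together with its timestamp, instead of relying
   * on libltc's internal queue behaviour. */
  while (ltc_decoder_read (decoder, &frame) > 0) {
    TimeCodeItem *item = g_new0 (TimeCodeItem, 1);

    item->ltc_frame = frame;
    item->running_time = running_time;
    g_queue_push_tail (queue, item);
  }
}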
AES128 support was only added in nettle version 3.0. With older versions the build fails with:
../subprojects/gst-plugins-bad/ext/hls/gsthlsdemux.h:110:10: error: field ‘ctx’ has incomplete type
struct CBC_CTX (struct aes128_ctx, AES_BLOCK_SIZE) aes_ctx;
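With nettle >= 3.0 the same field declaration compiles, since struct aes128_ctx
is a complete type there; a small sketch (the wrapper struct name is illustrative):

#include <nettle/aes.h>   /* struct aes128_ctx, AES_BLOCK_SIZE (nettle >= 3.0) */
#include <nettle/cbc.h>   /* CBC_CTX */

typedef struct
{
  /* Same declaration as in gsthlsdemux.h; complete with nettle >= 3.0 */
  struct CBC_CTX (struct aes128_ctx, AES_BLOCK_SIZE) aes_ctx;
} DecryptCtx;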
Commit a1584b6 caused a big performance drop when the downstream element
is not an msdk element, because reading data directly from video memory
is very slow.
This reverts commit a1584b6f99.
If 8 bit is required by the device/mode then it will be converted internally
by the SDK, but the SDK won't automatically convert from 8 to 10 bit. As
such, always use 10 bit VANC.
Some devices also require configuring a 10 bit video format when 10 bit
VANC is used, but those would fail regardless and the application would
have to configure the correct video format.
With newer versions of the SDK this information can be retrieved via the
BMDDeckLinkVANCRequires10BitYUVVideoFrames flag, but we don't use a new
enough SDK version yet to extract this information.
As the H265/H264 bitstream can support multiple slices,
mastering_display_info_state and content_light_level_state
should be changed only on the first slice segment.
Fixes #1152
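A sketch of the corresponding guard in the slice path, using the parsed
GstH265SliceHdr (the helper itself is illustrative, not the actual code):

#include <gst/codecparsers/gsth265parser.h>

/* Only the first slice segment of a picture may update
 * mastering_display_info_state / content_light_level_state */
static gboolean
may_update_hdr_state (const GstH265SliceHdr * slice_hdr)
{
  return slice_hdr->first_slice_segment_in_pic_flag;
}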
Although the D3D11 decoding API targets both desktop and UWP apps,
the DXVA header is guarded by "WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)",
which means it is only available for desktop apps.
To work around this inconsistency, we need to define WINAPI_PARTITION_DESKTOP
regardless of the target WinAPI partition.
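A sketch of the workaround, forcing the desktop partition before the DXVA
header is pulled in (the exact placement in the real code may differ):

#include <winapifamily.h>

/* When building for UWP, pretend the desktop partition is active so
 * that dxva.h exposes its structures anyway. */
#if !WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
#undef WINAPI_PARTITION_DESKTOP
#define WINAPI_PARTITION_DESKTOP 1
#endif

#include <dxva.h>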
The codec profile should be consistent with the frame fourcc code. This
fixes the pipeline below:
gst-launch-1.0 videotestsrc ! \
video/x-raw,width=320,height=240,format=P010_10LE ! msdkvp9enc ! \
fakesink