ICE candidates can be added to the array directly from the application
or from the webrtc main loop. Rename it to make it clear that it holds
remote ICE candidates from the peer, and protect it with a new mutex.
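A minimal sketch of the resulting pattern (the field and function names
here are hypothetical, not the actual webrtcbin code):

    /* Both the application thread and the webrtc main loop can append
     * remote candidates, so serialize access with a dedicated mutex. */
    typedef struct {
      GMutex ice_lock;                /* protects remote_ice_candidates */
      GArray *remote_ice_candidates;  /* renamed from the old generic name */
    } WebRTCState;

    static void
    add_remote_ice_candidate (WebRTCState * state, gchar * candidate)
    {
      g_mutex_lock (&state->ice_lock);
      g_array_append_val (state->remote_ice_candidates, candidate);
      g_mutex_unlock (&state->ice_lock);
    }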
The VP9 codec allows resizing reference frames by spec. Handling this
case is a bit tricky, especially when the resize happens on a
non-keyframe, because the pre-allocated decoder textures (i.e., the DPB)
have the negotiated resolution; to change resolution while decoding a
non-keyframe, each texture might need to be re-created, copied into a
new DPB somehow, and re-negotiated with downstream.
Because of this complicated negotiation-driven resize handling, this
commit adds a shader to the d3d11decoder object to resize only the
affected frames. Note that if the resolution change is detected on a
keyframe, the decoder will re-negotiate with downstream.
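A sketch of that decision, with illustrative (not actual) function
names:

    if (resolution_changed (decoder, picture)) {
      if (picture->is_keyframe) {
        /* Safe point: renegotiate with downstream and rebuild the DPB. */
        renegotiate_with_downstream (decoder);
      } else {
        /* Mid-stream change: keep the negotiated DPB and scale only the
         * affected frame with the decoder's internal shader. */
        d3d11_decoder_shader_resize (decoder, picture);
      }
    }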
Not only the textures for the decoder output view, but also any
destination texture that is copied from a decoder output texture needs
to be aligned. Otherwise the driver sometimes crashes/hangs (not sure
why).
The resolution of the NV12, P010, and P016 formats must be a multiple
of two; otherwise the texture cannot be created. Instead of doing this
alignment on each API consumer's side, do it in the buffer pool for
simplicity.
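The alignment itself is just rounding each dimension up to the next
even value, e.g. with GStreamer's GST_ROUND_UP_2 macro:

    /* NV12/P010/P016 have 2x2 subsampled chroma planes, so both
     * dimensions must be even before the texture is created. */
    guint aligned_width  = GST_ROUND_UP_2 (info->width);
    guint aligned_height = GST_ROUND_UP_2 (info->height);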
That's the value of NumDeltaPocs[RefRpsIdx], and we might be able to
derive it from the given SPS and slice header. However, since well-known
hardware implementations refer to the value, storing it makes things
easier.
The following hardware implementations use it:
* DXVA2: ucNumDeltaPocsOfRefRpsIdx
* NVDEC/VDPAU: NumDeltaPocsOfRefRpsIdx
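For reference, the derivation is just the sum of the negative and
positive picture counts of the short-term reference picture set per the
HEVC spec; a sketch, assuming the parser exposes those spec variables:

    /* NumDeltaPocs[i] = NumNegativePics[i] + NumPositivePics[i] */
    guint num_delta_pocs = st_rps->NumNegativePics + st_rps->NumPositivePics;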
See C.5.2.2, "Output and removal of pictures from the DPB": if the
number of pictures in the DPB is greater than or equal to
sps_max_dec_pic_buffering_minus1[HighestTid] + 1, the picture should be
output.
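As a sketch of that bumping rule (helper names hypothetical):

    /* C.5.2.2: bump pictures out of the DPB once it is full. */
    while (dpb_num_pictures (dpb) >=
        sps->max_dec_pic_buffering_minus1[highest_tid] + 1)
      dpb_output_one_picture (dpb);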
Now that the system_frame_number is saved on the pictures, we can use
the gst_video_decoder_get_frame() helper instead of getting the full
list and looping over it.
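A before/after sketch, assuming the picture struct carries the saved
system_frame_number as described:

    /* Before: fetch the whole pending-frame list and search it. */
    GList *frames = gst_video_decoder_get_frames (decoder);
    /* ... walk the list comparing system_frame_number ... */
    g_list_free_full (frames,
        (GDestroyNotify) gst_video_codec_frame_unref);

    /* After: direct lookup by the saved system frame number. */
    GstVideoCodecFrame *frame =
        gst_video_decoder_get_frame (decoder, picture->system_frame_number);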
On new_segment, the decoder is expected to negotiate. The decoder may
want to pre-allocate the needed buffers. Pass the max_dpb_size, as this
is needed to determine how many buffers should be allocated.
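A hypothetical shape of the updated vfunc (taking the H264 variant as
an example; the exact signature is an assumption, abbreviated for
illustration):

    /* The base class computes max_dpb_size from the SPS and hands it
     * to the subclass, which can size its buffer pool at negotiation
     * time. */
    gboolean (*new_sequence) (GstH264Decoder * decoder,
                              const GstH264SPS * sps,
                              gint max_dpb_size);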
This introduces a library containing a set of base classes that handle
the parsing and the state tracking needed to decode various codecs.
Currently H264, H265 and VP9 are supported. These base classes are used
to decode with low-level decoding APIs like DXVA, NVDEC, VDPAU, VAAPI
and V4L2 stateless decoders. The new library is named
gstreamer-codecs-1.0 / libgstcodecs.
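A rough sketch of how a backend plugs into one of these base classes
(type and vfunc names abbreviated, stubs omitted; treat the exact
signatures as assumptions):

    G_DEFINE_TYPE (MyH264Decoder, my_h264_decoder, GST_TYPE_H264_DECODER);

    static void
    my_h264_decoder_class_init (MyH264DecoderClass * klass)
    {
      GstH264DecoderClass *h264_class = GST_H264_DECODER_CLASS (klass);

      /* The base class parses the bitstream and tracks DPB state; the
       * subclass implements only the hardware-specific steps. */
      h264_class->new_sequence = my_h264_decoder_new_sequence;
      h264_class->start_picture = my_h264_decoder_start_picture;
      h264_class->decode_slice = my_h264_decoder_decode_slice;
      h264_class->end_picture = my_h264_decoder_end_picture;
      h264_class->output_picture = my_h264_decoder_output_picture;
    }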
- Display the caps of the pad we actually tried to link.
- Use the template caps, as the filter is likely not to have any caps
  set yet.
- Log the pad name as well.
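Roughly along the lines of (illustrative, not the actual code):

    GstCaps *caps = gst_pad_get_pad_template_caps (filter_pad);
    GST_DEBUG_OBJECT (element, "Failed to link %s:%s, template caps %"
        GST_PTR_FORMAT, GST_DEBUG_PAD_NAME (filter_pad), caps);
    gst_caps_unref (caps);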
The approach is quite simple and doesn't take all use cases into
account; it only implements support when we are using the internal
timecode we create ourselves.
Also, the way we compute the sought frame count is naive, but it works
for simple cases.
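The naive computation is just scaling the seek time by the frame rate
(a sketch, assuming a constant frame rate and a zero-based internal
timecode):

    guint64 frame_count = gst_util_uint64_scale (seek_time,
        fps_n, (guint64) fps_d * GST_SECOND);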
As per the discussion in the bug, remove the drop state from
transportreceivebin. Dropping data is necessary, but for bundled
configurations it needs to happen further downstream, after the mixed
flows have been separated.
Also support switching back to BLOCK from the PASS state.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1206
Add a missing dependency on wl_client_dep for the wayland build. Some
distros don't install the wayland-client headers in /usr/include (which
is perfectly valid; the pkg-config .pc file provides the right include
paths).
This commit moves the parsing code for superframes and frame headers
into the handle_frame() method, and removes the parse() implementation
from the vp9decoder base class.
The combination of
- multiple frames packed into a single input buffer (i.e., a superframe)
- reverse playback
is complicated, and it doesn't work as intended in some cases.
The most common audio sample rate in AV streams is 48kHz, and the most
common device output sample rate is 48kHz. This allows handling 48kHz
input streams without resampling.
Remove comments about avoiding the use of 48kHz.
This change is needed to support 2K DCI video modes.
Version 10.8 of the Decklink SDK supported DCI video modes for output
only. This updated version drops that restriction.
The latest version of the Decklink SDK is 11.5; however, the GStreamer
decklink plugin is not compatible with the API changes introduced in
version 11 of the SDK, so I have opted to upgrade to the latest 10.x
version instead.
Fixes dependency issues:
FAILED: subprojects/gst-plugins-bad/ext/dash/8bd0b95@@gstdash@sha/gstdashsink.c.obj
cl @subprojects/gst-plugins-bad/ext/dash/8bd0b95@@gstdash@sha/gstdashsink.c.obj.rsp
C:\builds\ystreet\gst-plugins-base\gst-build\subprojects\gst-plugins-base\gst-libs\gst/pbutils/pbutils.h(30): fatal error C1083: Cannot open include file: 'gst/pbutils/pbutils-enumtypes.h': No such file or directory
* Remove redundant variables for width/height and PAR from
  GstD3D11Window; GstVideoInfo holds all the values.
* Don't pass the PAR to gst_d3d11_window_prepare(); it will be parsed
  from the caps again.
* Remove duplicated math.
This fixes a regression introduced by commit 9dada90108.
Filters operate on raw data, so don't allow decodebin to produce
encoded data if a filter is defined.
My use case here is keeping the video stream untouched while applying a
filter on the audio stream, keeping the same audio format.