The encoder did not handle the end of the encoding sequence. This patch
drains any remaining internally cached bitstreams at EOS, as the decoder
already does.
Document says:
"To mark the end of the encoding sequence, call this function with a NULL
surface pointer. Repeat the call to drain any remaining internally cached
bitstreams—one frame at a time—until MFX_ERR_MORE_DATA is returned."
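A minimal sketch of such a drain loop, following the documented behaviour;
pushing the resulting bitstream downstream is only indicated by comments:

    #include <mfxvideo.h>

    static void
    drain_encoder (mfxSession session, mfxBitstream * bs)
    {
      mfxSyncPoint sync_point = NULL;
      mfxStatus status;

      for (;;) {
        /* A NULL surface pointer marks the end of the encoding sequence */
        status = MFXVideoENCODE_EncodeFrameAsync (session, NULL, NULL, bs,
            &sync_point);

        if (status == MFX_ERR_MORE_DATA)
          break;                /* all cached bitstreams have been drained */
        if (status < MFX_ERR_NONE)
          break;                /* a real error: stop draining */

        if (sync_point) {
          /* One cached frame came out: wait for it, then push it downstream */
          MFXVideoCORE_SyncOperation (session, sync_point, 300000 /* ms */);
          /* ...push bs->Data + bs->DataOffset, bs->DataLength bytes... */
          bs->DataLength = 0;
        }
      }
    }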
https://bugzilla.gnome.org/show_bug.cgi?id=793236
Sometimes the parent context is released before its children are
released, and in that case MFXClose on the parent session fails.
To make sure that child sessions are closed before the parent session,
the parent context needs to keep track of its child sessions and close
them first when it is released.
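An illustrative sketch of the idea (the struct and function names are
hypothetical, not the element's actual API): the parent context keeps a
list of its child sessions and closes them before its own session.

    #include <glib.h>
    #include <mfxvideo.h>

    typedef struct
    {
      mfxSession session;       /* parent session */
      GList *child_sessions;    /* mfxSession children joined to it */
    } ParentContext;

    static void
    parent_context_close (ParentContext * ctx)
    {
      GList *l;

      /* Close (and disjoin) every child first, otherwise MFXClose on the
       * parent session fails. */
      for (l = ctx->child_sessions; l; l = l->next) {
        mfxSession child = (mfxSession) l->data;

        MFXDisjoinSession (child);
        MFXClose (child);
      }
      g_list_free (ctx->child_sessions);
      ctx->child_sessions = NULL;

      MFXClose (ctx->session);
    }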
https://bugzilla.gnome.org/show_bug.cgi?id=793412
Currently a gst buffer is tied to one mfxFrameSurface when it is
allocated, and that pairing never changes. This assumes that the gst
buffer and the mfxFrameSurface have the same lifetime, but that is not
true: sometimes, even after downstream has finished with the gst buffer
of a frame, the mfxFrameSurface coupled with that buffer is still
locked, which means it is still being used by the driver.
So this patch does the following: every time a gst buffer is acquired
from the pool, it checks whether the surface coupled with the buffer is
unlocked, and if not, replaces it with a new unlocked one. In this way
the user (decoder or encoder) doesn't need to manage gst buffers whose
surfaces are still locked.
To do that, this patch includes the following:
1. GstMsdkContext
- Manages the available, used and locked MSDK surfaces in three lists:
1\ surfaces_avail : surfaces which are free and not used anywhere
2\ surfaces_used : surfaces coupled with a gst buffer and currently in use
3\ surfaces_locked : surfaces still locked even after the gst buffer is released
- Provides an API to get an available MSDK surface.
- Provides an API to release an MSDK surface.
2. GstMsdkVideoMemory
- Gets an available surface when it is allocated.
- Provides an API to replace the surface with a new unlocked one.
- Provides an API to release the surface held by the msdk video memory.
3. GstMsdkBufferPool
- In acquire_buffer, every time a gst buffer is acquired, a new available
surface is taken from the list.
- In release_buffer, it checks whether the buffer's surface is unlocked
(see the sketch after this list):
- If unlocked, it is put back on the available list.
- If still locked, it is put on the locked list.
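A sketch of the release-time check from item 3; the list container here
is a simplified stand-in for the GstMsdkContext bookkeeping, only the
surface->Data.Locked test is MediaSDK's:

    #include <glib.h>
    #include <mfxvideo.h>

    typedef struct
    {
      GList *surfaces_avail;
      GList *surfaces_used;
      GList *surfaces_locked;
    } SurfaceLists;

    static void
    release_surface (SurfaceLists * lists, mfxFrameSurface1 * surface)
    {
      lists->surfaces_used = g_list_remove (lists->surfaces_used, surface);

      if (surface->Data.Locked > 0) {
        /* The driver still holds the surface: park it on the locked list so
         * the next acquire_buffer picks a different, unlocked surface */
        lists->surfaces_locked = g_list_append (lists->surfaces_locked, surface);
      } else {
        /* Free for reuse */
        lists->surfaces_avail = g_list_append (lists->surfaces_avail, surface);
      }
    }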
This also fixes bug #793525.
https://bugzilla.gnome.org/show_bug.cgi?id=793413
https://bugzilla.gnome.org/show_bug.cgi?id=793525
Directsoundsrc/sink have multiple issues, most of which cannot be
fixed at all because the API is deprecated and has been implemented as
a compatibility wrapper around WASAPI since Vista.
Users and developers should now use the wasapisrc/sink elements, and
future development efforts should go towards those.
The low-latency property is *always* safe to enable, so applications
that do realtime communication should set it, and the elements will
automatically configure WASAPI to use the lowest possible device
period, and the audioringbuffer in audiobasesink will also be
configured accordingly.
Applications can also use exclusive mode during capture and playback
for the lowest possible latency if they know that the device will not
be used by any other application.
In this mode, the latency-time and buffer-time properties will be
completely ignored.
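A minimal application-side sketch of using these settings ("low-latency"
as described above; the boolean "exclusive" property name for exclusive
mode is an assumption here):

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *sink;

      gst_init (&argc, &argv);

      sink = gst_element_factory_make ("wasapisink", NULL);
      if (!sink)
        return 1;

      /* Always safe; realtime communication apps should set this */
      g_object_set (sink, "low-latency", TRUE, NULL);

      /* Only when no other application needs the device; latency-time and
       * buffer-time are then ignored */
      g_object_set (sink, "exclusive", TRUE, NULL);

      gst_object_unref (sink);
      return 0;
    }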
The AudioClient3 API is only available on Windows 10, and we will
automatically detect when it is available and use it.
However, using it for capturing audio with low latency and without
glitches seems to require setting the realtime priority of the entire
pipeline to "critical", which we cannot do from inside the element.
Hence, we can only enable that by default for wasapisink since
apps should be able to safely set the low-latency property to TRUE if
they need low-latency capture or playback.
This allows us to request ultra-low-latency device periods even in
shared mode. However, this requires good drivers and Windows 10, so
we only enable this when we detect that we are running on Windows 10
at runtime.
You can forcibly disable this feature on Windows 10 by setting
GST_WASAPI_DISABLE_AUDIOCLIENT3=1 in the environment.
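A sketch of the runtime gate this describes; the have_audioclient3 flag
stands in for whatever Windows 10 detection the element performs:

    #include <glib.h>

    static gboolean
    use_audioclient3 (gboolean have_audioclient3)
    {
      if (!have_audioclient3)
        return FALSE;           /* not on Windows 10 / API unavailable */

      /* Forcible opt-out via the environment */
      if (g_getenv ("GST_WASAPI_DISABLE_AUDIOCLIENT3") != NULL)
        return FALSE;

      return TRUE;
    }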
Since there is already an "adaptive-B" option, just use a boolean
property to enable B-pyramid.
Fixme: Not sure whether this can be supported in vp8 and vp9. It could
be possible through GPB (B-frames without backward reference), but that
can't be verified currently. We can move this to a common property once
it is verified with vp8 and vp9, without breaking backward
compatibility.
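Assuming the boolean maps onto mfxExtCodingOption2::BRefType, a sketch
of the intended behaviour (the property variable is illustrative):

    #include <glib.h>
    #include <mfxstructures.h>

    static void
    apply_b_pyramid (mfxExtCodingOption2 * option2, gboolean b_pyramid)
    {
      /* MFX_B_REF_PYRAMID enables pyramid B-frame references,
       * MFX_B_REF_OFF keeps plain B-frames */
      option2->BRefType = b_pyramid ? MFX_B_REF_PYRAMID : MFX_B_REF_OFF;
    }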
https://bugzilla.gnome.org/show_bug.cgi?id=791637
Add a new property "trellis" to enable trellis quantization.
Trellis is kept as a flags value (while it is a boolean for the gst
x264 encoder element) since MediaSDK's extended option headers allow
enabling/disabling it separately for I, P and B frames.
Subclass implementations always need to inform the base encoder if they
require the inclusion of extended header buffers (mfxExtCodingOption2
and mfxExtCodingOption3).
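A sketch of the flags-style mapping; the GStreamer-side flag values are
illustrative, while MFX_TRELLIS_* and the Trellis field of
mfxExtCodingOption2 are MediaSDK's:

    #include <glib.h>
    #include <mfxstructures.h>

    typedef enum
    {
      ENC_TRELLIS_NONE = 0,
      ENC_TRELLIS_I = 1 << 0,
      ENC_TRELLIS_P = 1 << 1,
      ENC_TRELLIS_B = 1 << 2,
    } EncTrellisFlags;

    static void
    apply_trellis (mfxExtCodingOption2 * option2, EncTrellisFlags flags)
    {
      mfxU16 trellis = 0;

      if (flags & ENC_TRELLIS_I)
        trellis |= MFX_TRELLIS_I;
      if (flags & ENC_TRELLIS_P)
        trellis |= MFX_TRELLIS_P;
      if (flags & ENC_TRELLIS_B)
        trellis |= MFX_TRELLIS_B;

      /* No flags set: explicitly turn trellis quantization off */
      option2->Trellis = (trellis != 0) ? trellis : MFX_TRELLIS_OFF;
    }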
https://bugzilla.gnome.org/show_bug.cgi?id=791637
This option controls downsampling in lookahead bitrate control mode.
According to the spec it is only supported for AVC.
Fixme: HEVC probably also supports this in recent MSDK versions. We
could move the enumeration types to a common header usable by multiple
codecs.
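A sketch of the property-to-MediaSDK mapping; the GStreamer enum is
illustrative, the MFX_LOOKAHEAD_DS_* values and the LookAheadDS field
of mfxExtCodingOption2 are MediaSDK's:

    #include <mfxstructures.h>

    typedef enum
    {
      ENC_LOOKAHEAD_DS_DEFAULT,   /* let MediaSDK decide */
      ENC_LOOKAHEAD_DS_OFF,
      ENC_LOOKAHEAD_DS_2X,
      ENC_LOOKAHEAD_DS_4X,
    } EncLookAheadDS;

    static void
    apply_lookahead_ds (mfxExtCodingOption2 * option2, EncLookAheadDS ds)
    {
      switch (ds) {
        case ENC_LOOKAHEAD_DS_OFF:
          option2->LookAheadDS = MFX_LOOKAHEAD_DS_OFF;
          break;
        case ENC_LOOKAHEAD_DS_2X:
          option2->LookAheadDS = MFX_LOOKAHEAD_DS_2x;
          break;
        case ENC_LOOKAHEAD_DS_4X:
          option2->LookAheadDS = MFX_LOOKAHEAD_DS_4x;
          break;
        default:
          option2->LookAheadDS = MFX_LOOKAHEAD_DS_UNKNOWN;
          break;
      }
    }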
https://bugzilla.gnome.org/show_bug.cgi?id=791637
MediaSDK supports a number of rate control algorithms; add all
possible options to the rate-control property.
Fixme: In case of failure, we currently don't have a proper way to
report which rate-control method failed. It would be better to add more
extensive validation of the EncQuery output on error. Unfortunately,
not all rate-control methods are supported by every codec, and we don't
have dynamic detection of the supported rate-control methods yet.
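A sketch of the validation idea mentioned in the Fixme: probing a
requested rate-control method with MFXVideoENCODE_Query before applying
it (the helper is illustrative):

    #include <glib.h>
    #include <mfxvideo.h>

    static gboolean
    rate_control_supported (mfxSession session, const mfxVideoParam * base,
        mfxU16 rc_method)
    {
      mfxVideoParam in = *base;
      mfxVideoParam out = { 0 };
      mfxStatus status;

      in.mfx.RateControlMethod = rc_method;   /* e.g. MFX_RATECONTROL_LA */
      out.mfx.CodecId = in.mfx.CodecId;

      status = MFXVideoENCODE_Query (session, &in, &out);

      /* An error, or a corrected RateControlMethod in the output, means
       * the requested method is not supported for this codec */
      return status >= MFX_ERR_NONE && out.mfx.RateControlMethod == rc_method;
    }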
https://bugzilla.gnome.org/show_bug.cgi?id=791637
We have the property "i-frames" to set the IDR interval in a GOP.
Unfortunately the MSDK HEVC encoder treats the IdrInterval field a bit
differently: IdrInterval == 1 indicates that every I-frame should be an
IDR (which is IdrInterval == 0 for the other codecs), IdrInterval == 2
means every other I-frame is an IDR (which is IdrInterval == 1 for the
other codecs), etc.
So we generalize the behaviour of the "i-frames" property by
incrementing the value by one in each case (only for HEVC).
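A sketch of the HEVC-only adjustment; the property variable and helper
are illustrative, IdrInterval and MFX_CODEC_HEVC are MediaSDK's:

    #include <glib.h>
    #include <mfxvideo.h>

    static void
    set_idr_interval (mfxInfoMFX * mfx, guint i_frames)
    {
      if (mfx->CodecId == MFX_CODEC_HEVC)
        /* HEVC: IdrInterval == N + 1 means what IdrInterval == N means
         * for the other codecs */
        mfx->IdrInterval = i_frames + 1;
      else
        mfx->IdrInterval = i_frames;
    }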
https://bugzilla.gnome.org/show_bug.cgi?id=791637
The common base-encoder properties are not valid for the mjpeg
encoder, which has no motion compensation or rate control. Delay the
property installation on the base gobject until the subclass class_init
gets invoked.
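An illustrative GObject sketch of the approach: the base class exposes a
helper that installs the common properties, and every subclass except
the mjpeg encoder calls it from its class_init. The names and the
property shown are hypothetical.

    #include <glib-object.h>

    enum
    {
      PROP_0,
      PROP_BITRATE,
      /* ...other common encoder properties... */
    };

    /* Called from each subclass class_init that wants the common properties */
    static void
    install_common_properties (GObjectClass * gobject_class)
    {
      g_object_class_install_property (gobject_class, PROP_BITRATE,
          g_param_spec_uint ("bitrate", "Bitrate",
              "Target bitrate in kbit/s", 0, G_MAXUINT, 2048,
              G_PARAM_READWRITE));
    }

    /* The mjpeg encoder's class_init simply never calls
     * install_common_properties(), so it does not expose rate-control or
     * motion-related properties. */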
https://bugzilla.gnome.org/show_bug.cgi?id=791637
The gst-msdk decoders prefer packetized streams as input, since in
that case we can avoid an unnecessary copy of the input bitstream into
the mfxBitstream. This works fine for codecs like h264 where we only
support byte-stream with AU alignment; other format conversions should
be done through parsers. But this won't work for codecs like vc1, where
we don't have an autoplugged parser and even the parser is not capable
of doing the format conversions. Packetizing through the base decoder's
parse() routine would bring a lot of unnecessary complexity and a
codecparsers library dependency. So we just use an internal GstAdapter
to keep track of the bitstream which is not consumed by msdk during
asynchronous decoding. This adapter is only used if the subclass
implementation sets "is_packetized" to FALSE on the msdk base decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792589
Add Simple and Main profile decode support.
Currently msdkvc1dec is not capable of handling codec_data; only
in-stream headers are supported. Also, the msdk vc1 decoder expects the
stream to contain a sequence header as per SMPTE 421M Annex L.
Most decodebin/playbin pipelines won't work with the above constraints
because vc1parse is still not an autoplugged element. The only way to
make msdkvc1dec work is to connect a vc1parse as an upstream element.
https://bugzilla.gnome.org/show_bug.cgi?id=792589
Use a drm render node as the first choice of device node file, and
fall back to the drm primary node (/dev/dri/card[0-9]) if there is no
render node available.
The basic logic is inherited from gstreamer-vaapi, but uses the gudev
API rather than libudev directly. The gudev library is added as a
dependency for msdk.
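A sketch of the node selection with gudev; the render/primary matching
on the device file name is illustrative:

    #include <string.h>
    #include <gudev/gudev.h>

    static gchar *
    find_drm_device_node (void)
    {
      const gchar *const subsystems[] = { "drm", NULL };
      GUdevClient *client = g_udev_client_new (subsystems);
      GList *devices, *l;
      gchar *render = NULL, *primary = NULL;

      devices = g_udev_client_query_by_subsystem (client, "drm");

      for (l = devices; l; l = l->next) {
        const gchar *file = g_udev_device_get_device_file (l->data);

        if (!file)
          continue;
        if (!render && strstr (file, "renderD"))
          render = g_strdup (file);         /* /dev/dri/renderD128, ... */
        else if (!primary && strstr (file, "/card"))
          primary = g_strdup (file);        /* /dev/dri/card[0-9] */
      }

      g_list_free_full (devices, g_object_unref);
      g_object_unref (client);

      if (render) {
        g_free (primary);
        return render;          /* first choice: the render node */
      }
      return primary;           /* fall back to the primary node */
    }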
https://bugzilla.gnome.org/show_bug.cgi?id=791599
A decoder decides to use video memory in the following cases:
1\ downstream's pool is an MSDK bufferpool;
2\ there is a shared GstMsdkContext in the pipeline.
This policy should be improved to handle more cases.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
In a pipeline like ".. ! decoder ! encoder ! .." that uses video
memory, the decoder needs to know the async depth of the following msdk
element so that it can allocate the correct amount of video memory.
Otherwise the decoder's memory is exhausted while processing.
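A minimal sketch of the sizing rule this implies (names illustrative):
the decoder allocates enough surfaces for its own async depth plus that
of the downstream msdk element.

    static unsigned int
    num_surfaces_needed (unsigned int own_async_depth,
        unsigned int downstream_async_depth)
    {
      /* Both elements can have this many frames in flight at once */
      return own_async_depth + downstream_async_depth;
    }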
https://bugzilla.gnome.org/show_bug.cgi?id=790752
GstMsdkContext is shared/created as follows:
- Look for a GstMsdkContext in the pipeline.
- If one is found, check by job type whether it belongs to a decoder,
encoder or vpp.
- If it has the same job type, create another instance of
GstMsdkContext with a joined session.
- Otherwise, just use the shared GstMsdkContext.
- If none is found, just create a new instance of GstMsdkContext.
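A sketch of that decision; the enum values are illustrative stand-ins,
not the actual GstMsdkContext API:

    #include <glib.h>

    typedef enum
    {
      JOB_DECODER,
      JOB_ENCODER,
      JOB_VPP,
    } MsdkJobType;

    typedef enum
    {
      CONTEXT_CREATE_NEW,       /* no shared context found in the pipeline */
      CONTEXT_CREATE_JOINED,    /* same job type: new context, joined session */
      CONTEXT_REUSE_SHARED,     /* different job type: reuse the shared one */
    } ContextDecision;

    static ContextDecision
    decide_context (gboolean found_in_pipeline, MsdkJobType shared_job,
        MsdkJobType own_job)
    {
      if (!found_in_pipeline)
        return CONTEXT_CREATE_NEW;
      if (shared_job == own_job)
        return CONTEXT_CREATE_JOINED;
      return CONTEXT_REUSE_SHARED;
    }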
https://bugzilla.gnome.org/show_bug.cgi?id=790752
According to the driver's instructions, if there are two or more
encoders or decoders in a process, the sessions should be joined with
MFXJoinSession.
To achieve this via GstContext, this patch adds a job type specifying
whether a context belongs to an encoder, decoder or vpp.
If an msdk element learns from the shared context that joining the
session is needed, it should create another instance of GstContext with
a joined session, which is not shared.
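A sketch of how a joined session can be created with MediaSDK (error
handling trimmed):

    #include <mfxvideo.h>

    static mfxSession
    create_joined_session (mfxSession parent)
    {
      mfxSession child = NULL;

      /* The second (and further) encoder/decoder in the process gets its
       * own session, joined to the existing parent session */
      if (MFXCloneSession (parent, &child) != MFX_ERR_NONE)
        return NULL;

      if (MFXJoinSession (parent, child) != MFX_ERR_NONE) {
        MFXClose (child);
        return NULL;
      }

      return child;
    }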
https://bugzilla.gnome.org/show_bug.cgi?id=790752
1\ In decide_allocation, the decoder makes its own msdk bufferpool
(see the sketch below).
- If downstream supports video meta, it simply replaces the proposed
pool with the msdk bufferpool.
- If not, it uses the msdk bufferpool as a side pool to decode into,
and copies each frame into downstream's bufferpool.
2\ Decide whether to use video memory or system memory.
- This is not completed in this patch.
- It might be decided in update_src_caps.
- But both system memory and video memory cases have been tested.
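A sketch of the pool decision in 1\; gst_query_find_allocation_meta and
GST_VIDEO_META_API_TYPE are GStreamer's, the surrounding helper is
illustrative:

    #include <gst/video/video.h>

    static void
    decide_pools (GstQuery * query, GstBufferPool * msdk_pool,
        GstBufferPool ** output_pool, GstBufferPool ** side_pool)
    {
      gboolean has_videometa =
          gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);

      if (has_videometa) {
        /* Downstream can handle arbitrary strides/offsets: decode straight
         * into the msdk bufferpool and push those buffers */
        *output_pool = gst_object_ref (msdk_pool);
        *side_pool = NULL;
      } else {
        /* Keep downstream's pool for output, decode into the msdk pool and
         * copy each finished frame across before pushing */
        *output_pool = NULL;
        *side_pool = gst_object_ref (msdk_pool);
      }
    }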
https://bugzilla.gnome.org/show_bug.cgi?id=790752