Commit graph

4 commits

Author SHA1 Message Date
Seungha Yang e82f1f8b0b mfvideoenc: Re-define default GOP size value
The behavior for a zero AVEncMPVGOPSize value varies depending on
the GPU vendor's implementation, and some GPUs will produce a
keyframe only once, at the very beginning of encoding.
That is unlikely to be the result users expect.

To make this property behave consistently across GPUs, this commit
changes the default value of the "gop-size" property to -1, meaning
"auto". When "gop-size" is unspecified, mfvideoenc will calculate
the GOP size from the framerate, as our x264enc implementation does.
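
As a rough illustration (not the exact code added by this commit; the
2-second keyframe interval, the fallback value, and the helper name are
assumptions made for this sketch), deriving an "auto" GOP size from the
framerate could look like:

    #include <gst/gst.h>

    /* Hypothetical sketch: pick a GOP size when "gop-size" is -1 ("auto").
     * The 2-second keyframe interval and the fallback value are assumptions
     * made for illustration only. */
    static guint
    calculate_auto_gop_size (gint gop_size, gint fps_n, gint fps_d)
    {
      if (gop_size >= 0)
        return (guint) gop_size;  /* user set an explicit value */

      if (fps_n <= 0 || fps_d <= 0)
        return 30;                /* no framerate negotiated, use a fallback */

      /* roughly one keyframe every two seconds */
      return (guint) gst_util_uint64_scale_round (2, fps_n, fps_d);
    }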

See also
https://docs.microsoft.com/en-us/windows/win32/directshow/avencmpvgopsize-property

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1911>
2020-12-27 03:58:39 +09:00
Seungha Yang 4b522dd355 mfvideoenc: Remove duplicated class registration code
Each codec subclass has the same code for class/element registration,
so we can move it into a single helper method, which will make
future enhancements simpler.
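
A minimal sketch of the kind of shared helper this suggests; the type and
function names below are illustrative, not the actual ones introduced by
the commit:

    #include <gst/gst.h>

    /* Illustrative only: one helper that registers a codec-specific encoder
     * subclass, so each codec does not repeat the same registration code. */
    typedef struct
    {
      const gchar *element_name;  /* e.g. "mfh264enc" */
      GType type;                 /* codec-specific subclass GType */
      guint rank;
    } MFVideoEncClassData;

    static gboolean
    mf_video_enc_register (GstPlugin * plugin, const MFVideoEncClassData * data)
    {
      return gst_element_register (plugin, data->element_name, data->rank,
          data->type);
    }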

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1909>
2020-12-26 14:25:06 +00:00
Seungha Yang 6195fcf857 mfvideoenc: Improve latency performance for hardware encoder
Unlike a software MFT (Media Foundation Transform), which processes
input and output data synchronously, a hardware MFT works in
asynchronous mode: output data might not be available right after
we push input data into the MFT.
Note that an async MFT fires two events: "METransformNeedInput",
which is raised when the MFT can accept more input data,
and "METransformHaveOutput", which signals that there is
pending data that can be output immediately.

To listen for these events, we can either wait synchronously via
IMFMediaEventGenerator::GetEvent() or use an IMFAsyncCallback
object, the asynchronous way, in which case the events are delivered
from Media Foundation's internal worker queue thread.
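
In compressed form (error handling omitted, and assuming the MFT's
IMFMediaEventGenerator interface has already been obtained), the two
options look like:

    #include <mfapi.h>
    #include <mfidl.h>

    /* Synchronous: blocks the calling thread until the next event arrives. */
    static HRESULT
    wait_for_event_sync (IMFMediaEventGenerator * gen, MediaEventType * type)
    {
      IMFMediaEvent *event = nullptr;
      HRESULT hr = gen->GetEvent (0, &event);
      if (SUCCEEDED (hr)) {
        event->GetType (type);  /* METransformNeedInput / METransformHaveOutput */
        event->Release ();
      }
      return hr;
    }

    /* Asynchronous: the callback's Invoke() runs later on Media Foundation's
     * internal worker queue thread. */
    static HRESULT
    listen_async (IMFMediaEventGenerator * gen, IMFAsyncCallback * cb)
    {
      return gen->BeginGetEvent (cb, nullptr);
    }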

To handle this asynchronous operation, the previous flow
(using IMFMediaEventGenerator::GetEvent()) was as follows:
- Check whether there is pending output data and push it downstream.
- Pull events (from the streaming thread) until there is at least
  one pending "METransformNeedInput" event.
- Then push one input buffer into the MFT from the streaming thread.
- Check whether there is a pending "METransformHaveOutput" event again.
  If there is, push the new output data downstream
  (although it is unlikely that output data is already pending at this point).
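
A condensed sketch of that flow, assuming an async hardware MFT and ignoring
the bookkeeping a real element needs (the function name and the simplified
control flow are for illustration only):

    #include <mfapi.h>
    #include <mfidl.h>

    /* Simplified: pull events on the streaming thread until one input sample
     * has been fed, draining any output the MFT announces along the way. */
    static HRESULT
    feed_one_frame_old_flow (IMFTransform * mft, IMFMediaEventGenerator * gen,
        IMFSample * input)
    {
      bool fed_input = false;

      while (!fed_input) {
        IMFMediaEvent *event = nullptr;
        HRESULT hr = gen->GetEvent (0, &event);   /* blocks streaming thread */
        if (FAILED (hr))
          return hr;

        MediaEventType type = MEUnknown;
        event->GetType (&type);
        event->Release ();

        if (type == METransformNeedInput) {
          hr = mft->ProcessInput (0, input, 0);
          fed_input = SUCCEEDED (hr);
        } else if (type == METransformHaveOutput) {
          MFT_OUTPUT_DATA_BUFFER out = { 0 };
          DWORD status = 0;
          if (SUCCEEDED (mft->ProcessOutput (0, 1, &out, &status))) {
            /* ... push the encoded sample downstream ... */
            if (out.pSample)
              out.pSample->Release ();
            if (out.pEvents)
              out.pEvents->Release ();
          }
        }
      }
      return S_OK;
    }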

The above flow runs on the upstream streaming thread. That means
even if output data is available, it may only be output later,
when the next buffer is pushed from the upstream streaming thread.
This introduces at least one frame of latency for live streams.

To reduce this latency, this commit makes the flow fully
asynchronous, as hardware MFTs were designed to be used, so that
encoded data can be output as soon as it is available. More
specifically, an IMFAsyncCallback object handles the
"METransformNeedInput" and "METransformHaveOutput" events on
Media Foundation's internal thread, and new output data is
also pushed downstream from that thread.
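
A skeletal sketch of such a callback object (reference counting simplified,
locking and the actual encoder plumbing omitted; this is not the commit's
implementation):

    #include <mfapi.h>
    #include <mfidl.h>
    #include <atomic>

    /* Skeleton IMFAsyncCallback: Invoke() is called on Media Foundation's
     * work-queue thread whenever the async MFT queues an event, so input can
     * be fed and output pushed without waiting for the streaming thread. */
    class MFTEventCallback : public IMFAsyncCallback
    {
    public:
      MFTEventCallback (IMFTransform * mft, IMFMediaEventGenerator * gen)
          : ref_ (1), mft_ (mft), gen_ (gen) {}

      /* IUnknown */
      STDMETHODIMP QueryInterface (REFIID riid, void ** obj)
      {
        if (riid == IID_IMFAsyncCallback || riid == IID_IUnknown) {
          *obj = static_cast<IMFAsyncCallback *> (this);
          AddRef ();
          return S_OK;
        }
        *obj = nullptr;
        return E_NOINTERFACE;
      }
      STDMETHODIMP_ (ULONG) AddRef () { return ++ref_; }
      STDMETHODIMP_ (ULONG) Release ()
      {
        ULONG count = --ref_;
        if (count == 0)
          delete this;
        return count;
      }

      /* IMFAsyncCallback */
      STDMETHODIMP GetParameters (DWORD *, DWORD *) { return E_NOTIMPL; }
      STDMETHODIMP Invoke (IMFAsyncResult * result)
      {
        IMFMediaEvent *event = nullptr;
        if (SUCCEEDED (gen_->EndGetEvent (result, &event))) {
          MediaEventType type = MEUnknown;
          event->GetType (&type);
          event->Release ();

          if (type == METransformNeedInput) {
            /* ... pop a queued input sample and call mft_->ProcessInput() ... */
          } else if (type == METransformHaveOutput) {
            /* ... call mft_->ProcessOutput() and push the encoded sample
             *     downstream here, on Media Foundation's thread ... */
          }
        }
        /* Re-arm to receive the next event. */
        return gen_->BeginGetEvent (this, nullptr);
      }

    private:
      std::atomic<ULONG> ref_;
      IMFTransform *mft_;
      IMFMediaEventGenerator *gen_;
    };

The element would create one such object, queue input samples for it, and
call BeginGetEvent() once to start receiving events.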

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1520>
2020-12-19 18:56:33 +00:00
Seungha Yang 9625d19279 mediafoundation: Add h264 encoder
Add a Media Foundation H.264 encoder. If hardware encoders are available
on the system, they are registered with a higher rank than the software encoder.
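
In GStreamer terms that ranking choice boils down to something like the
following sketch (the rank values are illustrative, not necessarily the ones
the commit picks):

    #include <gst/gst.h>

    /* Illustrative only: prefer hardware-backed encoders when autoplugging
     * by registering them with a higher rank than the software fallback. */
    static gboolean
    register_h264_encoder (GstPlugin * plugin, GType type, gboolean is_hardware)
    {
      guint rank = is_hardware ? GST_RANK_SECONDARY + 1 : GST_RANK_SECONDARY;

      return gst_element_register (plugin, "mfh264enc", rank, type);
    }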

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/760>
2020-04-28 14:37:31 +00:00