Unless the pipeline is in certain modes, we do not want to try to link
every track. The previous debug message implied this, but the method did
not actually return early.
Also, we now always return early if we receive a track that is neither
video nor audio.
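A minimal sketch of the intended control flow, with hypothetical field
and function names (the real method lives in GESPipeline):

    static void
    link_track (GESPipeline *self, GESTrack *track)
    {
      /* nothing to link unless we are previewing or rendering */
      if (!(self->mode & (GES_PIPELINE_MODE_PREVIEW | GES_PIPELINE_MODE_RENDER)))
        return;

      /* always end early for tracks that are neither video nor audio */
      if (track->type != GES_TRACK_TYPE_VIDEO &&
          track->type != GES_TRACK_TYPE_AUDIO)
        return;

      /* ... actual linking ... */
    }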
Export the GES library API in headers when we are building the
library itself, otherwise import the API from the headers.
This fixes linker warnings on Windows when building with MSVC.
Also fix up some missing config.h includes when building the library,
which are needed to get the API export define from config.h.
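A minimal sketch of the export/import pattern for MSVC, assuming
hypothetical GES_API and BUILDING_GES names (the real define comes from
config.h and the build system):

    /* ges-api.h -- illustrates the pattern */
    #ifdef BUILDING_GES
    # define GES_API __declspec(dllexport)  /* building the library: export */
    #else
    # define GES_API __declspec(dllimport)  /* using the library: import */
    #endif

    GES_API gboolean ges_init (void);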
Fixes https://gitlab.freedesktop.org/gstreamer/gst-editing-services/issues/42
In the rendering case we were getting random failures and often the
pipeline was not able to preroll because some pads were not linked
inside encodebin.
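For context, encodebin exposes its sink pads on request, one per
stream; a minimal sketch of such a link, assuming hypothetical
encodebin and srcpad variables:

    /* ask encodebin for a sink pad compatible with the stream caps */
    GstPad *sinkpad = NULL;
    GstCaps *caps = gst_caps_from_string ("audio/x-raw");

    g_signal_emit_by_name (encodebin, "request-pad", caps, &sinkpad);
    gst_caps_unref (caps);

    if (sinkpad == NULL || gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK)
      GST_ERROR ("failed to link the track into encodebin");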
https://bugzilla.gnome.org/show_bug.cgi?id=795422
In playsink the default video queue max size is 3 buffers, which is
sometimes not enough for our use case.
Allow up to 2 seconds of buffered data, giving us more time to do
the transition between clips and thus avoiding dropped frames in
the sink when bringing up a new clip takes too much time.
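A minimal sketch of how such queue limits are typically set in
GStreamer, assuming a hypothetical queue variable and that the other
limits are disabled so the time limit takes effect:

    /* drop the 3-buffer limit and allow up to 2 seconds of buffered data */
    g_object_set (queue,
        "max-size-buffers", 0,
        "max-size-bytes", 0,
        "max-size-time", 2 * GST_SECOND,
        NULL);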
Differential Revision: https://phabricator.freedesktop.org/D1854
Summary:
The user might want to render only some media types of the timeline,
for example only its audio part.
It was failing because we were not connecting the track but were still
trying to 'render' it.
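A hedged illustration of the use case, using current GES API names
(the container/audio caps and output URI are placeholders):

    /* render only the audio part of a timeline */
    GESTimeline *timeline = ges_timeline_new ();
    GESPipeline *pipeline = ges_pipeline_new ();
    GstEncodingProfile *profile;

    ges_timeline_add_track (timeline, GES_TRACK (ges_audio_track_new ()));
    ges_pipeline_set_timeline (pipeline, timeline);

    profile = GST_ENCODING_PROFILE (gst_encoding_container_profile_new (
        "audio-only", NULL, gst_caps_from_string ("application/ogg"), NULL));
    gst_encoding_container_profile_add_profile (
        GST_ENCODING_CONTAINER_PROFILE (profile),
        GST_ENCODING_PROFILE (gst_encoding_audio_profile_new (
            gst_caps_from_string ("audio/x-vorbis"), NULL, NULL, 0)));

    ges_pipeline_set_render_settings (pipeline,
        "file:///tmp/audio-only.ogg", profile);
    ges_pipeline_set_mode (pipeline, GES_PIPELINE_MODE_RENDER);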
Depends on D153
Reviewers: Mathieu_Du
Reviewed By: Mathieu_Du
Differential Revision: http://phabricator.freedesktop.org/D154
g-ir-scanner includes section docs as class/interface docs if the section name is equal to the lowercase type name.
Since all the documentation is in section blocks, rename them to match the type names.
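For example, a section block renamed so g-ir-scanner associates it with
the GESTimeline type (the old name here is hypothetical):

    /* before: name does not match any type, docs are not attached */
    /**
     * SECTION:ges-timeline
     * @short_description: Multimedia timeline
     */

    /* after: name matches the lowercased type name GESTimeline */
    /**
     * SECTION:gestimeline
     * @short_description: Multimedia timeline
     */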
https://bugzilla.gnome.org/show_bug.cgi?id=727776
We currently do not support multiple tracks of the same type in
GESPipeline, and we actually need to set the presence field to avoid a
scenario where we have only video in the video track and no audio in
the audio track. In that case audiotestsrc is used and we end up
encoding the whole audio stream, but no decoded video frame has reached
the decodebin src pad, so that pad has not been created and thus will
not be linked to encodebin.
On the audio side, EOS is emitted so fast that the resulting stream
will not have any video in it, as the muxer will not even have had a
video pad created.
Setting the presence ensures that encodebin requires the video stream
(because of how encodebin behaves), so the muxer will create a pad for
it and wait for its EOS.
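A minimal sketch of forcing the stream's presence on a video profile
(the Theora caps are illustrative):

    /* presence = 1: exactly one such video stream must be used, so
     * encodebin always creates the video pad and the muxer waits for
     * its EOS */
    GstEncodingVideoProfile *vprof =
        gst_encoding_video_profile_new (gst_caps_from_string ("video/x-theora"),
            NULL, NULL, 1);

An existing profile can equally be updated with
gst_encoding_profile_set_presence().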
We actually do not need it, as everywhere we would need it we are
already locked against the timeline.dyn_lock; we need to make sure this
stays the case in the future.
The locking hierarchy of the mutexes was wrong and could possibly lead
to deadlocks.
Setting the profile on an encodebin can fail, and if that happens there
will be no profile set at all. We should return FALSE in GESPipeline
when that happens.
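A hedged sketch of one way to detect the failure, reading the property
back after setting it (an assumption about the approach, not the exact
GES code):

    GstEncodingProfile *set_profile = NULL;

    g_object_set (encodebin, "profile", profile, NULL);
    g_object_get (encodebin, "profile", &set_profile, NULL);

    if (set_profile == NULL)
      return FALSE;  /* encodebin rejected the profile */

    gst_encoding_profile_unref (set_profile);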