While gint64 and int64_t always have the same width, clang does not treat them as the same type.
/Applications/Xcode.app/Contents/Developer/usr/bin/make -C decklink
CXX libgstdecklink_la-gstdecklinkaudiosink.lo
gstdecklinkaudiosink.cpp:675:79: error: cannot initialize a parameter of type 'int64_t *' (aka 'long long *') with an rvalue of type 'gint64 *' (aka 'long *')
ret = buf->output->attributes->GetInt (BMDDeckLinkMaximumAudioChannels, &max_channels);
^~~~~~~~~~~~~
./linux/DeckLinkAPI.h:692:87: note: passing argument to parameter 'value' here
virtual HRESULT GetInt (/* in */ BMDDeckLinkAttributeID cfgID, /* out */ int64_t *value) = 0;
^
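A portable way to make this compile with both toolchains is to read into a local int64_t and assign it back afterwards; a minimal sketch, with the interface name taken from the SDK header quoted above:

    #include <glib.h>
    #include <stdint.h>
    #include "DeckLinkAPI.h"

    /* Read the attribute into a local int64_t first and assign it to the
     * gint64 destination afterwards. Both types have the same width on
     * all supported platforms, but clang treats 'long' and 'long long'
     * as distinct types, so passing a gint64* where an int64_t* is
     * expected is an error. */
    static HRESULT
    get_max_audio_channels (IDeckLinkAttributes * attributes,
        gint64 * max_channels)
    {
      int64_t value = 0;
      HRESULT ret = attributes->GetInt (BMDDeckLinkMaximumAudioChannels,
          &value);

      if (ret == S_OK)
        *max_channels = value;
      return ret;
    }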
Scale down to milliseconds, otherwise at least some hardware has problems
scheduling the frames (or schedules them too slowly) and we run out of
available frames.
https://bugzilla.gnome.org/show_bug.cgi?id=770282
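A sketch of what scheduling in a millisecond timescale can look like; only ScheduleVideoFrame() and the GStreamer time macros are real API, the function and its parameters are illustrative:

    #include <gst/gst.h>
    #include "DeckLinkAPI.h"

    /* Schedule a frame using milliseconds as the time scale. GStreamer
     * times are in nanoseconds; scale both the start time and the
     * duration down to milliseconds and pass 1000 as the timescale. */
    static HRESULT
    schedule_frame (IDeckLinkOutput * output, IDeckLinkVideoFrame * frame,
        GstClockTime running_time, GstClockTime duration)
    {
      return output->ScheduleVideoFrame (frame,
          running_time / GST_MSECOND, duration / GST_MSECOND, 1000);
    }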
We have no idea which timestamps they are supposed to have, so the only thing
we can do at this point is to drop them. Packets without timestamps happen
when audio is captured without corresponding video, which shouldn't happen
under normal circumstances.
https://bugzilla.gnome.org/show_bug.cgi?id=747633
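A minimal sketch of the check, assuming the usual GStreamer buffer macros:

    #include <gst/gst.h>

    /* Drop buffers that carry no timestamp: we cannot know where they
     * belong in the stream, so passing them downstream would only
     * confuse synchronization. */
    static GstFlowReturn
    handle_audio_packet (GstBuffer * buffer)
    {
      if (!GST_CLOCK_TIME_IS_VALID (GST_BUFFER_PTS (buffer))) {
        gst_buffer_unref (buffer);
        return GST_FLOW_OK;
      }
      /* ... push the buffer downstream as usual ... */
      return GST_FLOW_OK;
    }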
When the mode of decklinkvideosink is set to "auto", the sink claims to
support the full set of caps that it can support for all modes. Then, every
time new caps are set, the sink will automatically find the correct mode for
these caps and set it.
Caveat: We have no way to know whether a specific mode will actually work for
your hardware. If you try sending 4K video to a 1080p screen, it will
silently fail; we have no way to detect that in advance. Manually setting
the mode at least gave the user a way to double-check what they are doing.
https://bugzilla.gnome.org/show_bug.cgi?id=759600
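A sketch of how such a lookup can work; the mode table and its fields are hypothetical, only the caps accessors are real GStreamer API:

    #include <gst/gst.h>

    /* Hypothetical mode table entry; the real plugin keeps a similar
     * list of all modes the hardware supports. */
    typedef struct
    {
      gint width, height, fps_n, fps_d;
    } DecklinkMode;

    /* Find the first mode whose properties match the newly set caps. */
    static const DecklinkMode *
    find_mode_for_caps (const DecklinkMode * modes, guint n_modes,
        GstCaps * caps)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);
      gint width, height, fps_n, fps_d;
      guint i;

      if (!gst_structure_get_int (s, "width", &width) ||
          !gst_structure_get_int (s, "height", &height) ||
          !gst_structure_get_fraction (s, "framerate", &fps_n, &fps_d))
        return NULL;

      for (i = 0; i < n_modes; i++) {
        if (modes[i].width == width && modes[i].height == height &&
            modes[i].fps_n == fps_n && modes[i].fps_d == fps_d)
          return &modes[i];
      }
      return NULL;
    }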
Otherwise we're going to return times starting at 0 again after shutting down
an element for a specific input/output and then using it again later.
https://bugzilla.gnome.org/show_bug.cgi?id=755426
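One possible shape of such a persistent offset, purely illustrative; all names here are hypothetical:

    #include <gst/gst.h>

    /* Fold the time reached in a previous run into an epoch so a later
     * use of the same input/output continues from where it stopped
     * instead of starting at 0 again. g_mutex-style locking is omitted
     * for brevity. */
    typedef struct
    {
      GstClockTime epoch;       /* accumulated time from previous runs */
      GstClockTime last_time;   /* last time reported in this run */
    } PersistentTime;

    static GstClockTime
    get_persistent_time (PersistentTime * t, GstClockTime hw_time)
    {
      t->last_time = t->epoch + hw_time;
      return t->last_time;
    }

    static void
    shutdown_time (PersistentTime * t)
    {
      /* The hardware time restarts at 0 on the next run, so remember
       * how far we got. */
      t->epoch = t->last_time;
    }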
We were converting all times to our internal running time, i.e. the time the
sink itself has already spent in PLAYING, but forgot to do that for the
running time calculated from the buffer timestamps. As a result, all buffers
were scheduled much later if the pipeline's running time did not start at 0.
This happens, for example, if a base time is explicitly set on the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=754528
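A sketch of the missing conversion; 'internal_base_time' is a hypothetical name for the pipeline running time at which the sink last went to PLAYING:

    #include <gst/gst.h>

    /* Translate a pipeline running time into the sink's internal
     * running time, i.e. the time the sink itself has spent in PLAYING,
     * by subtracting the pipeline running time at which the sink (last)
     * went to PLAYING. */
    static GstClockTime
    to_internal_running_time (GstClockTime pipeline_running_time,
        GstClockTime internal_base_time)
    {
      if (pipeline_running_time < internal_base_time)
        return 0;
      return pipeline_running_time - internal_base_time;
    }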
The autodetection mode was broken because of a race condition in the input
mode setting: the mode could be reverted when the streaming thread replaced
it with the old mode in the middle of the mode-changed callback.
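One way to close such a race, sketched with a lock and a generation counter; all names are hypothetical and this is not the actual fix:

    #include <glib.h>

    /* g_mutex_init() must be called on 'lock' before first use. */
    typedef struct
    {
      GMutex lock;
      guint generation;
      gint mode;
    } ModeState;

    static void
    set_mode (ModeState * s, gint new_mode)
    {
      g_mutex_lock (&s->lock);
      s->mode = new_mode;
      s->generation++;
      g_mutex_unlock (&s->lock);
    }

    static void
    maybe_restore_mode (ModeState * s, gint old_mode, guint seen_generation)
    {
      g_mutex_lock (&s->lock);
      /* Only restore if nobody changed the mode since we last looked;
       * otherwise a stale update would revert the freshly set mode. */
      if (s->generation == seen_generation)
        s->mode = old_mode;
      g_mutex_unlock (&s->lock);
    }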
Add the diff between the external time when we went to playing and
the external time when the pipeline went to playing. Otherwise we
will always start outputting from 0 instead of the current running
time.
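Expressed as a formula, with hypothetical parameter names:

    #include <gst/gst.h>

    /* Shift the start time by the difference between the external
     * (hardware) time when this element went to PLAYING and the
     * external time when the pipeline went to PLAYING. The element
     * goes to PLAYING at or after the pipeline, so the difference is
     * non-negative. */
    static GstClockTime
    compute_start_time (GstClockTime running_time,
        GstClockTime external_at_element_playing,
        GstClockTime external_at_pipeline_playing)
    {
      return running_time +
          (external_at_element_playing - external_at_pipeline_playing);
    }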
gstdecklink.cpp: In member function 'virtual HRESULT GStreamerDecklinkInputCallback::VideoInputFrameArrived(IDeckLinkVideoInputFrame*, IDeckLinkAudioInputPacket*)':
gstdecklink.cpp:498:22: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (capture_time > m_input->clock_start_time)
^
gstdecklink.cpp:503:22: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (capture_time > m_input->clock_offset)
^
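A sketch of a warning-free comparison, assuming capture_time is a signed int64_t as in the DeckLink API:

    #include <gst/gst.h>
    #include <stdint.h>

    /* capture_time comes from the DeckLink API as a signed value,
     * while the stored clock times are unsigned GstClockTime. Casting
     * the signed value after checking it is non-negative makes the
     * comparison well-defined and silences -Werror=sign-compare. */
    static GstClockTime
    clip_capture_time (int64_t capture_time, GstClockTime clock_start_time)
    {
      if (capture_time > 0 && (GstClockTime) capture_time > clock_start_time)
        return (GstClockTime) capture_time - clock_start_time;
      return 0;
    }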
The driver has an internal buffer of unspecified and unconfigurable size, and
it will pull data from our ring buffer as fast as it can until that is full.
Unfortunately that means that we pull silence from the ringbuffer unless its
size happens to be larger than the driver's internal ringbuffer.
The good news is that it's not required to completely fill the buffer for
proper playback. So we now throttle reading from the ringbuffer whenever
the driver has buffered more than half of our ringbuffer size, by waiting
on the clock until it has buffered less than that again.
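A sketch of the throttling, using the real GetBufferedAudioSampleFrameCount() query and GStreamer clock waiting; the threshold handling is simplified:

    #include <gst/gst.h>
    #include "DeckLinkAPI.h"

    /* Once the driver has buffered more than half of our ringbuffer
     * size, wait on the clock for the time it takes the excess samples
     * to play out before reading more. */
    static void
    maybe_throttle (IDeckLinkOutput * output, GstClock * clock,
        guint32 half_ringbuffer_samples, guint sample_rate)
    {
      uint32_t buffered = 0;

      if (output->GetBufferedAudioSampleFrameCount (&buffered) != S_OK)
        return;

      if (buffered > half_ringbuffer_samples) {
        GstClockTime wait = gst_util_uint64_scale (
            buffered - half_ringbuffer_samples, GST_SECOND, sample_rate);
        GstClockID id = gst_clock_new_single_shot_id (clock,
            gst_clock_get_time (clock) + wait);
        gst_clock_id_wait (id, NULL);
        gst_clock_id_unref (id);
      }
    }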
The ringbuffer's acquire() happens too early, and the ringbuffer's start()
will only be called after the clock has advanced a bit... which it won't
unless we start scheduled playback.
Use the pipeline clock, not the decklink clock. Both will return exactly the
same time once the decklink clock has been slaved to the pipeline clock and
received its first observation, but until then the decklink clock returns
bogus values. And since both return exactly the same values afterwards, we
can just as well use the pipeline clock directly.
Otherwise we might start the scheduled playback before the audio or video
streams are actually enabled, and then error out later because they are
enabled too late. We enable the streams when getting the caps, which might
be *after* we were set to the PLAYING state.
Otherwise we might start the streams before the audio or video streams are
actually enabled, and then error out later because they are enabled too
late. We enable the streams when getting the caps, which might be *after*
we were set to the PLAYING state.
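Both messages above describe the same ordering problem; a minimal sketch of the idea, with hypothetical state fields:

    #include <gst/gst.h>

    /* Only start scheduled playback once both requested streams are
     * actually enabled; the caps (and hence the enabling) may arrive
     * after the PLAYING transition. */
    typedef struct
    {
      gboolean audio_enabled, video_enabled;
      gboolean started;
    } OutputState;

    static void
    maybe_start_scheduled_playback (OutputState * state)
    {
      if (state->started)
        return;
      if (!state->audio_enabled || !state->video_enabled)
        return;                 /* wait until caps enabled both streams */

      /* ... call StartScheduledPlayback() here ... */
      state->started = TRUE;
    }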
This fixes handling of flushing seeks, where we will get a PAUSED->PLAYING
state transition after the previous one without actually going to PAUSED
first.
Otherwise we will overflow the internal buffer of the hardware
with useless frames and run into an error. This is necessary until
this bug in basesink is fixed:
https://bugzilla.gnome.org/show_bug.cgi?id=742916
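A sketch of such a guard, using the real GetBufferedVideoFrameCount() query; the limit is a hypothetical parameter:

    #include <gst/gst.h>
    #include "DeckLinkAPI.h"

    /* Before scheduling another frame, check how many frames the
     * hardware already has buffered and stop queueing once a limit is
     * reached, instead of overflowing the internal buffer. */
    static gboolean
    can_schedule_frame (IDeckLinkOutput * output, uint32_t max_buffered)
    {
      uint32_t buffered = 0;

      if (output->GetBufferedVideoFrameCount (&buffered) != S_OK)
        return TRUE;            /* be permissive if the query fails */

      return buffered < max_buffered;
    }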
decklinkvideosink might be added to the pipeline later, or its state might
be handled separately from the pipeline's, in which case the running time
when we (last) went into PLAYING state will be different from the
pipeline's. However, we need our own start time to tell the Decklink API
which running time should be displayed at the moment we go to PLAYING and
start scheduled rendering.
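A sketch of computing that element-local start time; 'playing_base_time' is a hypothetical name for the clock time at which the element last went to PLAYING:

    #include <gst/gst.h>

    /* The element's own start time is the current clock time minus the
     * clock time at which this element (not the pipeline) last went to
     * PLAYING; that is the running time to hand to the Decklink API
     * when scheduled rendering begins. */
    static GstClockTime
    get_element_start_time (GstElement * element,
        GstClockTime playing_base_time)
    {
      GstClock *clock = gst_element_get_clock (element);
      GstClockTime now, start;

      if (!clock)
        return GST_CLOCK_TIME_NONE;

      now = gst_clock_get_time (clock);
      start = now > playing_base_time ? now - playing_base_time : 0;
      gst_object_unref (clock);

      return start;
    }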
No idea where the DecklinkAPIDispatch.cpp comes from on Windows, but this
should still work. It will only become a problem once we use other parts of
the API.