Add the diff between the external time when we went to playing and
the external time when the pipeline went to playing. Otherwise we
will always start outputting from 0 instead of the current running
time.
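A minimal sketch of that computation, with illustrative names (external_our_start and external_pipeline_start stand for the external clock times captured at the two PLAYING transitions; the real element stores them in its own fields):

#include <gst/gst.h>

/* Sketch only; names are illustrative, not the element's real fields. */
static GstClockTime
scheduled_start_time (GstClockTime external_our_start,
    GstClockTime external_pipeline_start)
{
  GstClockTime start_time = 0;

  /* Running time that already elapsed before this element reached PLAYING;
   * without adding it we would always start outputting from 0. */
  if (external_our_start > external_pipeline_start)
    start_time += external_our_start - external_pipeline_start;

  return start_time;
}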
gstdecklink.cpp: In member function 'virtual HRESULT GStreamerDecklinkInputCallback::VideoInputFrameArrived(IDeckLinkVideoInputFrame*, IDeckLinkAudioInputPacket*)':
gstdecklink.cpp:498:22: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (capture_time > m_input->clock_start_time)
^
gstdecklink.cpp:503:22: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (capture_time > m_input->clock_offset)
^
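The usual way to silence such a warning, sketched here with assumed types (BMDTimeValue is the signed DeckLink time type, the clock fields are GstClockTime, i.e. guint64), is to bring both operands to the same signedness before comparing:

#include <gst/gst.h>
#include <stdint.h>

typedef int64_t BMDTimeValue;   /* stand-in for the DeckLink SDK typedef */

/* Sketch of a typical -Wsign-compare fix: reject negative values first,
 * then compare with both operands as the unsigned GstClockTime type. */
static GstClockTime
offset_capture_time (BMDTimeValue capture_time, GstClockTime clock_start_time)
{
  if (capture_time >= 0 && (GstClockTime) capture_time > clock_start_time)
    return (GstClockTime) capture_time - clock_start_time;

  return 0;
}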
The driver has an internal buffer of unspecified and unconfigurable size, and
it will pull data from our ring buffer as fast as it can until that is full.
Unfortunately that means that we pull silence from the ringbuffer unless its
size is by coincidence larger than the driver's internal ringbuffer.
The good news is that it's not required to completely fill the buffer for
proper playback. So we now throttle reading from the ringbuffer whenever
the driver has buffered more than half of our ringbuffer size by waiting
on the clock for the amount of time until it has buffered less than that
again.
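A sketch of that throttling with invented names (driver_queued and half_ringbuffer in samples, rate in Hz): once the driver has more than half of our ringbuffer queued, wait on the clock for the time the excess corresponds to before handing out more data.

#include <gst/gst.h>

/* Sketch only; the real element derives these values from its ringbuffer
 * and the driver's reported position. */
static void
maybe_throttle (GstClock * clock, guint64 driver_queued,
    guint64 half_ringbuffer, gint rate)
{
  if (driver_queued > half_ringbuffer) {
    guint64 excess = driver_queued - half_ringbuffer;
    /* Time until the driver has played the excess back down again. */
    GstClockTime wait = gst_util_uint64_scale (excess, GST_SECOND, rate);
    GstClockID id =
        gst_clock_new_single_shot_id (clock, gst_clock_get_time (clock) + wait);

    gst_clock_id_wait (id, NULL);
    gst_clock_id_unref (id);
  }
}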
The ringbuffer's acquire() is too early, and ringbuffer's start() will only be
called after the clock has advanced a bit... which it won't unless we start
scheduled playback.
Not from the decklink clock. Both will return exactly the same time once the
decklink clock got slaved to the pipeline clock and received the first
observation, but until then it will return bogus values. But as both return
exactly the same values, we can just as well use the pipeline clock directly.
There is no reason to pre-roll more buffers here as we have our own ringbuffer
with more segments around it, and we can immediately provide more buffers to
OpenSL ES when it requests them from the callback.
Pre-rolling a single buffer before starting is necessary though, as otherwise
we will only output silence.
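A sketch of that single-buffer preroll against the plain OpenSL ES API; the queue and play interfaces and the data pointer are assumed to come from the element's setup code:

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

/* Sketch: enqueue exactly one buffer, then start playing; the buffer queue
 * callback keeps refilling from our own ringbuffer afterwards. */
static SLresult
preroll_one_buffer (SLAndroidSimpleBufferQueueItf queue, SLPlayItf play,
    const void * data, SLuint32 size)
{
  SLresult res = (*queue)->Enqueue (queue, data, size);

  if (res != SL_RESULT_SUCCESS)
    return res;

  return (*play)->SetPlayState (play, SL_PLAYSTATE_PLAYING);
}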
Lowers latency a bit, depending on latency-time and buffer-time settings.
4 is the "typical" number of buffers defined by Android's OpenSL ES
implementation, and its code is optimized for this. Also because we
have our own ringbuffer around this, we will always have enough
buffering on our side already.
Allows for more efficient processing.
The pseudo buffer pool code was using gst_buffer_is_writable()
alone to try and figure out if a cached buffer could be reused.
It needs to check for memory writability too. Also check the map
result and fix the map flags.
https://bugzilla.gnome.org/show_bug.cgi?id=734264
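A sketch of such a reuse check; whether the actual fix uses gst_buffer_is_all_memory_writable() or walks the memories by hand, and which map flags it picks, may differ:

#include <gst/gst.h>

/* Sketch: a cached buffer may only be reused if the buffer *and* all of its
 * memories are writable, and the map call must be checked and done with
 * write access. */
static gboolean
try_reuse_cached_buffer (GstBuffer * cached, GstMapInfo * map)
{
  if (!gst_buffer_is_writable (cached) ||
      !gst_buffer_is_all_memory_writable (cached))
    return FALSE;

  if (!gst_buffer_map (cached, map, GST_MAP_READWRITE))
    return FALSE;

  return TRUE;
}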
Use YUV instead of RGB textures, then convert using the new Apple-specific
shader in GstGLColorConvert. Also use GLMemory directly instead of using the
GL upload meta, avoiding an extra texture copy we used to have before.
When doing texture sharing we don't need to call CVPixelBufferLockBaseAddress to
map the buffer into CPU memory. This cuts about 10% relative CPU time from a vtdec !
glimagesink pipeline.
Otherwise we might start the scheduled playback before the audio or video streams are
actually enabled, and then error out later because they are enabled too late.
We enable the streams when getting the caps, which might be *after* we were
set to PLAYING state.
Otherwise we might start the streams before the audio or video streams are
actually enabled, and then error out later because they are enabled too late.
We enable the streams when getting the caps, which might be *after* we were
set to PLAYING state.
This API has been deprecated for eternities and Microsoft
stopped shipping the headers in 2010 according to Wikipedia,
so let's just remove it and focus on bringing the plugins
based on the newer APIs up to snuff.
This fixes handling of flushing seeks, where we will get a PAUSED->PLAYING
state transition after the previous one without actually going to PAUSED
first.
Otherwise we will overflow the internal buffer of the hardware
with useless frames and run into an error. This is necessary until
this bug in basesink is fixed:
https://bugzilla.gnome.org/show_bug.cgi?id=742916
decklinkvideosink might be added to the pipeline later, or its state might
be handled separately from the pipeline. In that case the running time when
we (last) went into PLAYING state will be different from the pipeline's.
However, we need our own start time to tell the Decklink API which running
time should be displayed at the moment we go to PLAYING and start scheduled
rendering.
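A sketch of the idea, assuming the Linux DeckLink SDK headers; the sink's own running time computed at its PAUSED->PLAYING transition is what gets handed to StartScheduledPlayback():

#include <gst/gst.h>
#include "DeckLinkAPI.h"        /* Blackmagic SDK header */

/* Sketch only; names are illustrative, error handling omitted. */
static HRESULT
start_scheduled_playback_sketch (GstElement * sink, IDeckLinkOutput * output)
{
  GstClock *clock = gst_element_get_clock (sink);
  GstClockTime base_time = gst_element_get_base_time (sink);
  GstClockTime now, start_time = 0;

  if (clock) {
    now = gst_clock_get_time (clock);
    if (now > base_time)
      start_time = now - base_time;
    gst_object_unref (clock);
  }

  /* The running time at which the first scheduled frame should show up. */
  return output->StartScheduledPlayback (start_time, GST_SECOND, 1.0);
}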
... and hope that everything will be fine. This shouldn't really happen but
previously did happen during shutdown. It should be fixed in videoencoder now,
but better to be on the safe side here.
Use AVF provided timings to timestamp output buffers. Use the running time at
the time the first buffer is produced to base timestamps on. Report 1-frame
latency based on the negotiated framerate instead of hardcoding 4ms latency.
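For the latency part, a sketch of deriving one frame of latency from the negotiated framerate (fps_n/fps_d taken from the caps):

#include <gst/gst.h>

/* Sketch: one frame of latency from the negotiated framerate. */
static GstClockTime
one_frame_latency (gint fps_n, gint fps_d)
{
  if (fps_n <= 0)
    return GST_CLOCK_TIME_NONE;        /* variable framerate, unknown */

  return gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
}

A live source would then report this with gst_query_set_latency (query, TRUE, latency, latency) in its latency query handler.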
The property is in kbit/s and we store it in bit/s, so just multiply and
divide by 1000. No need to put a factor of 8 in there.
kVTCompressionPropertyKey_AverageBitRate is also in bit/s according to
its documentation.
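A sketch of the conversion as it would look against the VideoToolbox API; the property key is the one named above, the rest of the names are illustrative:

#include <glib.h>
#include <VideoToolbox/VideoToolbox.h>

/* Sketch: the property is kbit/s, the session wants bit/s, so multiply by
 * 1000 only (no factor of 8). */
static void
set_average_bitrate (VTCompressionSessionRef session, guint bitrate_kbps)
{
  SInt32 bits_per_second = bitrate_kbps * 1000;
  CFNumberRef value = CFNumberCreate (NULL, kCFNumberSInt32Type,
      &bits_per_second);

  VTSessionSetProperty (session, kVTCompressionPropertyKey_AverageBitRate,
      value);
  CFRelease (value);
}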
This updates the dshowvideosink to work with the GStreamer 1.0.x APIs, and
avoids the GLib threading API that has been deprecated since GLib 2.32.
https://bugzilla.gnome.org/show_bug.cgi?id=699364
No idea where the DecklinkAPIDispatch.cpp comes from on Windows,
but this should still work. Will just become a problem once we
use other parts of the API.
Otherwise we're going to starve other elements if the decklink clock
is slower than the pipeline clock, or starts much later.
Of course this will still cause problems if the decklink clock and ours are
completely out of sync, or running at a very different rate. But this at least
works better now.
If we just count the frames and calculate timestamps from that, all frames
will arrive late in the sink as we have a live source here. Instead take
the pipeline clock at capture time as reference.
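A sketch of that timestamping; the buffer and element are assumed to be the source's own:

#include <gst/gst.h>

/* Sketch: use the pipeline clock time at capture as the buffer timestamp
 * instead of deriving it from a frame counter. */
static void
timestamp_captured_frame (GstElement * src, GstBuffer * frame)
{
  GstClock *clock = gst_element_get_clock (src);
  GstClockTime now, base_time;

  if (!clock)
    return;

  now = gst_clock_get_time (clock);
  base_time = gst_element_get_base_time (src);
  GST_BUFFER_PTS (frame) = now > base_time ? now - base_time : 0;
  gst_object_unref (clock);
}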
We have to handle the callback object a bit differently (see the sketch below):
a) it needs a virtual destructor
b) we need to set the callback to NULL when we're done with the output
c) create a new one every time
https://bugzilla.gnome.org/show_bug.cgi?id=740616
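A sketch of such a callback object, assuming the Linux DeckLink SDK headers (no calling-convention macros) and a non-atomic refcount for brevity; class and function names are illustrative:

#include "DeckLinkAPI.h"        /* Blackmagic SDK header */

class ExampleOutputCallback : public IDeckLinkVideoOutputCallback
{
public:
  ExampleOutputCallback () : m_refcount (1) {}
  /* a) a virtual destructor, so deleting through the interface is safe */
  virtual ~ExampleOutputCallback () {}

  virtual HRESULT ScheduledFrameCompleted (IDeckLinkVideoFrame *,
      BMDOutputFrameCompletionResult) { return S_OK; }
  virtual HRESULT ScheduledPlaybackHasStopped () { return S_OK; }

  virtual HRESULT QueryInterface (REFIID, LPVOID *) { return E_NOINTERFACE; }
  virtual ULONG AddRef () { return ++m_refcount; }
  virtual ULONG Release ()
  {
    ULONG ref = --m_refcount;
    if (ref == 0)
      delete this;
    return ref;
  }

private:
  ULONG m_refcount;             /* not atomic, kept simple for the sketch */
};

/* b) detach the callback when we are done with the output,
 * c) and create a fresh object each time the output is set up. */
static void
setup_and_teardown (IDeckLinkOutput * output)
{
  ExampleOutputCallback *cb = new ExampleOutputCallback ();
  output->SetScheduledFrameCompletionCallback (cb);

  /* ... streaming ... */

  output->SetScheduledFrameCompletionCallback (NULL);
  cb->Release ();
}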
We will run into an assertion in set_caps() if we try to change
caps while the source is already running. Don't try to find new
caps in GstBaseSrc::negotiate() to prevent caps changes.
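A sketch of a GstBaseSrc::negotiate override along those lines; the default behaviour is passed in as a function pointer only to keep the sketch standalone (in the element it would be a chain-up to the parent class vfunc), and the exact condition in the real fix may differ:

#include <gst/gst.h>
#include <gst/base/gstbasesrc.h>

/* Sketch: once caps are set and the source is running, report success
 * without looking for new caps so set_caps() is not hit again. */
static gboolean
negotiate_sketch (GstBaseSrc * src,
    gboolean (* default_negotiate) (GstBaseSrc *))
{
  if (gst_pad_has_current_caps (GST_BASE_SRC_PAD (src)))
    return TRUE;

  return default_negotiate (src);
}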
The object lock only protects the session, as we modify
the session from other threads when the bitrate property
is changed. Don't hold it much longer than for session-related
things.
And we need to release the video decoder stream lock before
enqueueing a frame. It might wait for our callback to dequeue
a frame from another thread, which will then take the stream
lock too and deadlock.
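A sketch of the unlock/lock pattern around the enqueue, using VideoToolbox's decode call as the example; session, sample buffer and frame reference are assumed to come from the caller:

#include <gst/video/video.h>
#include <gst/video/gstvideodecoder.h>
#include <VideoToolbox/VideoToolbox.h>

/* Sketch: drop the decoder stream lock around the enqueue. The decode
 * callback can run on another thread and take the stream lock to push out
 * a finished frame; holding it here would deadlock. */
static OSStatus
enqueue_frame_sketch (GstVideoDecoder * decoder,
    VTDecompressionSessionRef session, CMSampleBufferRef sample,
    void * frame_ref)
{
  OSStatus status;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  status = VTDecompressionSessionDecodeFrame (session, sample,
      0 /* decodeFlags */, frame_ref, NULL /* infoFlagsOut */);
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  return status;
}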
It is not required on OSX apparently and was only added in 10.9.6 there.
Calculating the correct level from the configuration is not trivial, so let's
just not set a level at all here.
DVB-T2 supports 5, 10 and 1.712 MHz bandwidths
Order of the enum values (new values after _AUTO)
has been kept congruent with the one in the v4l
API for consistency
Previously known as DMB-T/H, this is the
terrestrial DTV broadcast standard currently
used by the People's Republic of China,
Hong Kong, Laos and Macau (officially),
and by Malaysia, Iraq, Jordan, Syria and
Lebanon (experimentally).
These apply to ISDB-T, DVB-T2 and DTMB
Order of the enum values (new rates after _AUTO)
has been kept congruent with the one in the v4l
API for consistency.
According to the v4l-dvb API docs, these are only
used for DVB-T2 at the moment.
Order of the enum values (new rates after _AUTO)
has been kept congruent with the one in the v4l
API for consistency.
iOS has special stride requirements that we don't know yet, so copy
input buffers into buffers allocated by iOS for now.
Later we should check the stride and probably provide a buffer pool for these
buffers so upstream can write into them directly.