Currently this only does probing and does not handle messages from
endpoints/devices. In the future we want to do proper monitoring, which
is well supported in WASAPI.
https://bugzilla.gnome.org/show_bug.cgi?id=792897
We need to parse the WAVEFORMATEXTENSIBLE structure, figure out what
positions the channels have (if they are positional), and reorder them
as necessary.
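As a rough sketch of that mapping (assuming the mask bits from
ksmedia.h and GStreamer's channel-position enum; most positions and all
error handling elided):

  /* Map WAVEFORMATEXTENSIBLE.dwChannelMask bits to GStreamer channel
   * positions, in mask-bit order. Needs mmreg.h/ksmedia.h and
   * <gst/audio/audio.h>. */
  static const struct
  {
    DWORD wasapi_mask;
    GstAudioChannelPosition gst_pos;
  } chan_map[] = {
    {SPEAKER_FRONT_LEFT, GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT},
    {SPEAKER_FRONT_RIGHT, GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT},
    {SPEAKER_FRONT_CENTER, GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER},
    {SPEAKER_LOW_FREQUENCY, GST_AUDIO_CHANNEL_POSITION_LFE1},
    /* ... remaining speaker positions ... */
  };

  static void
  parse_positions (const WAVEFORMATEXTENSIBLE * fmt,
      GstAudioChannelPosition * pos)
  {
    guint i, ch = 0;

    for (i = 0; i < G_N_ELEMENTS (chan_map); i++) {
      if (fmt->dwChannelMask & chan_map[i].wasapi_mask)
        pos[ch++] = chan_map[i].gst_pos;
    }
  }

The resulting position array can then be fed to
gst_audio_get_channel_reorder_map() to reorder samples as needed.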
https://bugzilla.gnome.org/show_bug.cgi?id=792897
According to the VP8 spec, the first partition (whose size can be
derived from the frame header) should contain all the compressed header
information, and the GStreamer codecparser was implemented based on
that. But this doesn't seem to be the case for some streams (#792773),
and libvpx handles them fine because it uses the whole frame size (not
the first partition size) to initialize the bool decoder.
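Illustratively (these are not the parser's actual identifiers), the fix
amounts to bounding the bool decoder by the whole frame, as libvpx does:

  /* was: bounded by the first partition only */
  init_bool_decoder (&bd, data + header_size,
      first_part_size - header_size);

  /* now: bounded by the whole frame, like libvpx */
  init_bool_decoder (&bd, data + header_size,
      frame_size - header_size);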
https://bugzilla.gnome.org/show_bug.cgi?id=792773
We call the base class first as this will remove the pad from
the aggregator, thus stopping misc callbacks from being called,
one of which (process_textures) would recreate the vertex_buffer
if it has already been destroyed.
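A minimal sketch of the ordering (element and helper names are made up
here):

  static void
  example_release_pad (GstElement * element, GstPad * pad)
  {
    /* chain up first: this removes the pad from the aggregator, so a
     * late process_textures () can no longer recreate the
     * vertex_buffer */
    GST_ELEMENT_CLASS (parent_class)->release_pad (element, pad);

    /* only now is it safe to destroy the pad's GL resources */
    example_pad_destroy_vertex_buffer (EXAMPLE_PAD (pad));
  }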
https://bugzilla.gnome.org/show_bug.cgi?id=760873
For libsrtp 1, add defines that translate the new namespaced identifiers
to the old unnamespaced ones. Also move the code for setting and getting
a stream's ROC into two compat functions that match libsrtp2's API.
It seems that libsrtp2 properly supports changing the ROC without having
to touch the sequence numbers afterwards, given that srtp_set_stream_roc
sets a pending_roc field, so the entire roc_changed dance should not be
needed anymore. The compat functions for libsrtp 1 just contain our
preexisting hacks, however, so the dance is still needed there.
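As a rough sketch of the compat layer's shape (the define list is
abbreviated and the wrapper bodies, which hold the libsrtp 1 hacks, are
omitted):

  #ifndef HAVE_SRTP2
  /* translate the new namespaced identifiers to the old ones */
  #define srtp_err_status_t err_status_t
  #define srtp_err_status_ok err_status_ok
  #define srtp_crypto_policy_t crypto_policy_t
  #define srtp_crypto_policy_set_rtp_default crypto_policy_set_rtp_default
  /* ... */

  /* compat functions matching libsrtp2's ROC API */
  static srtp_err_status_t srtp_set_stream_roc (srtp_t sess,
      guint32 ssrc, guint32 roc);
  static srtp_err_status_t srtp_get_stream_roc (srtp_t sess,
      guint32 ssrc, guint32 * roc);
  #endif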
libsrtp2 has no means of discovering the streams in the session, so to
create the stats structure we need to iterate over our own set of SSRCs.
For this we also need to re-add the previously removed ssrcs_set to the
encoder.
https://bugzilla.gnome.org/show_bug.cgi?id=776901
Fix a regression when used in combination with the new flvmux, which
was ported to GstAggregator and sends plain video/x-flv caps before
sending full caps that include streamheaders.
This information could be used, for example, to pick a decoder that
supports a specific chroma format and/or bit depth, like 4:2:2 10-bit.
It can also be used to inform the decoder earlier about the format it
is about to decode.
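For instance, a parser could expose fields along these lines (the exact
field names here are an assumption):

  gst_caps_set_simple (caps,
      "chroma-format", G_TYPE_STRING, "4:2:2",
      "bit-depth-luma", G_TYPE_UINT, 10,
      "bit-depth-chroma", G_TYPE_UINT, 10, NULL);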
https://bugzilla.gnome.org/show_bug.cgi?id=792039
According to BlackMagic, there is no fixed limit on the number of
devices on the decklink API side. Many PC motherboards can support 6
decklink cards, each with up to 8 inputs, so a limit of 16 might well
be too low.
https://bugzilla.gnome.org/show_bug.cgi?id=777239
Both the source and the sink elements were broken in a number of ways:
* prepare() was assuming that the format was always S16LE 2ch 44.1kHz.
We now probe the preferred format with GetMixFormat().
* Device initialization was done with the wrong buffer size
(buffer_time is in microseconds, not nanoseconds; see the sketch
after this list).
* sink_write() and src_read() were just plain wrong and would never
write or read anything useful.
* Some functions in prepare() were always returning FALSE which meant
trying to use the elements would *always* fail.
* get_caps() and delay() were not implemented at all.
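A condensed sketch of the fixed prepare() steps (variable names
illustrative, error handling elided):

  WAVEFORMATEX *mix_format = NULL;
  REFERENCE_TIME buf_dur;
  HRESULT hr;

  /* probe the device's preferred format instead of hardcoding it */
  hr = IAudioClient_GetMixFormat (client, &mix_format);

  /* spec->buffer_time is in microseconds; REFERENCE_TIME is in 100ns
   * units, so multiply by 10 (not by 100, and not by 1) */
  buf_dur = spec->buffer_time * 10;

  hr = IAudioClient_Initialize (client, AUDCLNT_SHAREMODE_SHARED, 0,
      buf_dur, 0, mix_format, NULL);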
TODO: support for >2 channels
TODO: pro-audio low-latency
TODO: SPDIF and other encoded passthroughs
Three new properties are now implemented: role, mute, and device.
* 'role' designates the stream role of the initialized device, see:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370842(v=vs.85).aspx
* 'device' is a system-wide GUIDesque string for a specific device.
* 'mute' is a sink property and simply mutes the output.
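For example (the role value and the device string below are only
placeholders):

  gst-launch-1.0 audiotestsrc ! wasapisink role=multimedia \
      device="{0.0.0.00000000}.{some-device-guid}"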
On my Windows 8.1 system, the lowest latency that works is:
wasapisrc buffer-time=20000
wasapisink buffer-time=10000
i.e., 20ms and 10ms respectively. These values are close to the lowest
possible with the IAudioClient interface. Further improvements require
porting to IAudioClient2 or IAudioClient3.
https://docs.microsoft.com/en-us/windows-hardware/drivers/audio/low-latency-audio
Instead of a massive if/else/if/else/if/else/...:
* Use a common cleanup path for allocated items just before leaving
the function (which will be freed only if we're not dealing with
a delayed SPU).
* "goto" that cleanup path wherever needed (see the sketch below)
CID #1427096
CID #1427114
Sometimes we might get an audio packet without a corresponding video
frame. In these cases, the stream and hardware reference timestamps
would be missing, because they are calculated on the video frame.
Instead of
potentially breaking stuff downstream that might depend on these, we now
extrapolate them.
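Roughly (field names hypothetical), the extrapolation advances the last
known values by the elapsed audio samples:

  GstClockTime elapsed = gst_util_uint64_scale (samples_since_video,
      GST_SECOND, sample_rate);

  stream_time = last_stream_time + elapsed;
  hardware_time = last_hardware_time + elapsed;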
https://bugzilla.gnome.org/show_bug.cgi?id=792042
When we receive a video or audio buffer, we calculate the next stream
time based on the current stream time + buffer duration. If the next
buffer's stream time is after that, we issue a warning.
This happens because the stream time coming in from Decklink should be
continuous and without gaps. If there is a gap, it means that
something went wrong, e.g. the internal buffer pool is empty (too many
buffers queued up downstream).
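Sketched with hypothetical field names, the check described above is:

  GstClockTime expected = self->last_stream_time + self->last_duration;

  if (stream_time > expected)
    GST_WARNING_OBJECT (self, "Stream time gap: expected %"
        GST_TIME_FORMAT ", got %" GST_TIME_FORMAT,
        GST_TIME_ARGS (expected), GST_TIME_ARGS (stream_time));

  self->last_stream_time = stream_time;
  self->last_duration = duration;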
https://bugzilla.gnome.org/show_bug.cgi?id=781776