This is a live source, therefore:
* use GST_FORMAT_TIME as the default format
* set do-timestamp to TRUE
* properly implement the LATENCY query
This allows expected live usage like: playbin2 uri=dv://
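Roughly, the three points above translate to something like the following
in 0.10 GstBaseSrc terms (MySrc, the boilerplate around it and the 40ms
latency figure are placeholders, not this element's actual code):

    static void
    my_src_init (MySrc * src, MySrcClass * klass)
    {
      gst_base_src_set_live (GST_BASE_SRC (src), TRUE);
      gst_base_src_set_format (GST_BASE_SRC (src), GST_FORMAT_TIME);
      gst_base_src_set_do_timestamp (GST_BASE_SRC (src), TRUE);
    }

    static gboolean
    my_src_query (GstBaseSrc * basesrc, GstQuery * query)
    {
      if (GST_QUERY_TYPE (query) == GST_QUERY_LATENCY) {
        /* placeholder figure: report one captured frame of latency */
        gst_query_set_latency (query, TRUE, 40 * GST_MSECOND, 40 * GST_MSECOND);
        return TRUE;
      }
      return GST_BASE_SRC_CLASS (parent_class)->query (basesrc, query);
    }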
flvmux only accepts raw audio in little endian, but audiotestsrc
produces audio in the native endianness, which makes linking
between audiotestsrc and flvmux fail on big endian machines. Add
an audioconvert element in between the two to fix this.
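As a self-contained illustration (not the actual unit test code), the fixed
pipeline looks like this:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GstBus *bus;
      GstMessage *msg;
      GError *err = NULL;

      gst_init (&argc, &argv);

      /* audioconvert converts the native-endianness output of
       * audiotestsrc to the little-endian audio flvmux accepts */
      pipeline = gst_parse_launch (
          "audiotestsrc num-buffers=100 ! audioconvert ! flvmux ! fakesink",
          &err);
      if (pipeline == NULL) {
        g_printerr ("failed to build pipeline: %s\n", err->message);
        g_error_free (err);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      bus = gst_element_get_bus (pipeline);
      msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
          GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
      gst_message_unref (msg);
      gst_object_unref (bus);
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }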
In ID3 v2.3, compressed frames have a 4-byte data length indicator
after the frame header that indicates the size of the decompressed data.
This integer is not a sync-safe integer in v2.3 tags; only in v2.4
is it sync-safe.
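For reference, the difference between the two encodings (illustrative
helpers, not the parser's actual functions):

    /* ID3v2.3: a plain 32-bit big-endian integer */
    static guint32
    read_uint32_be (const guint8 * data)
    {
      return ((guint32) data[0] << 24) | (data[1] << 16) |
          (data[2] << 8) | data[3];
    }

    /* ID3v2.4: a sync-safe integer, 7 significant bits per byte */
    static guint32
    read_syncsafe_uint32 (const guint8 * data)
    {
      return ((guint32) (data[0] & 0x7f) << 21) | ((data[1] & 0x7f) << 14) |
          ((data[2] & 0x7f) << 7) | (data[3] & 0x7f);
    }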
Reversing the unsynchronisation seems to work slightly differently
for ID3 v2.3 tags and v2.4 tags: v2.3 tags don't have sync-safe frame
sizes in the frame header, so the unsynchronisation is applied to
the whole tag data, including all the frame headers. v2.4 frames
have sync-safe sizes, however, so the unsynchronisation only needs
to be applied to the actual frame data, and that appears to be what
is done in practice. So we need to undo the unsynchronisation on a
per-frame basis for v2.4 tags for things to work properly.
Fixes extraction of coverart/images from APIC frames in ID3 v2.4
tags (#588148).
Add unit test for this as well.
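Undoing the unsynchronisation boils down to collapsing every 0xFF 0x00
byte pair back to a single 0xFF; a minimal sketch of that (illustrative
name, not the tag parser's actual code), run per frame for v2.4 and over
the whole tag for v2.3:

    static gsize
    undo_unsync (guint8 * data, gsize size)
    {
      gsize in, out = 0;

      for (in = 0; in < size; in++, out++) {
        data[out] = data[in];
        /* skip the 0x00 that was inserted after each 0xFF */
        if (data[in] == 0xff && in + 1 < size && data[in + 1] == 0x00)
          in++;
      }
      return out;           /* new (possibly smaller) size */
    }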
Set the default slave method to the much better skew algorithm. This is the
default in the new base class, but we override it here as well for the
upcoming release.
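For application code, the same algorithm can also be selected at runtime via
the sink's "slave-method" property, e.g. (assuming `sink` is any
baseaudiosink-derived element such as alsasink):

    /* pick the "skew" clock-slaving algorithm by its enum nick */
    gst_util_set_object_arg (G_OBJECT (sink), "slave-method", "skew");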
Dequeuing buffers from _buffer_alloc() seems to cause strange occasional
high latencies (almost 200ms). It is simpler, and seems to work much better,
to DQBUF from the same thread that queues the next buffer.
This also does the following changes:
(1) pull the bufferpool code out into gstv4l2bufferpool.c, and make a
bit more generic so it can be used both for v4l2src and v4l2sink
(2) move some of the device probing/configuration/caps stuff into
gstv4l2object.c so it does not have to be duplicated between
v4l2src and v4l2sink
Fixes bug #590280.
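The single-threaded queue-then-dequeue ordering is roughly this
(simplified, with error handling and buffer bookkeeping left out):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int
    show_frame (int fd, int index)
    {
      struct v4l2_buffer buf;

      /* queue the buffer to display... */
      memset (&buf, 0, sizeof (buf));
      buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
      buf.memory = V4L2_MEMORY_MMAP;
      buf.index = index;
      if (ioctl (fd, VIDIOC_QBUF, &buf) < 0)
        return -1;

      /* ...and reclaim a finished one from the same thread, instead of
       * doing the DQBUF over in _buffer_alloc() */
      memset (&buf, 0, sizeof (buf));
      buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
      buf.memory = V4L2_MEMORY_MMAP;
      if (ioctl (fd, VIDIOC_DQBUF, &buf) < 0)
        return -1;

      return buf.index;     /* buffer that can be reused */
    }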
Whenever we see a gap, we flush the temporary packets (but not the adapter).
If we had some data temporarily stored, it will be output (the sound will be
a bit garbled... but that's how it sounds on Mac OS X :)
Reverse-engineered by comparing:
* An RTP-hinted file provided by Darwin Streaming Server
* The output produced by DSS for that same file
Also used various streaming sources available on the internet to fine-tune
the code.
The header/codec_data extraction methods are from FFmpeg (LGPL).
Request pads are removed by the element instance in PAUSED->READY
so we need to re-request pads for every run and link them again.
Last fix for bug #590447.
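In test code that looks roughly like this (0.10 API; element and pad
template names are placeholders):

    static void
    link_for_run (GstElement * src, GstElement * mux)
    {
      GstPad *srcpad, *sinkpad;

      /* the request pad from the previous run is gone after
       * PAUSED->READY, so request and link a fresh one */
      sinkpad = gst_element_get_request_pad (mux, "sink_%d");
      srcpad = gst_element_get_static_pad (src, "src");
      gst_pad_link (srcpad, sinkpad);
      gst_object_unref (srcpad);
      gst_object_unref (sinkpad);
    }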
Use some of the SDP attributes, when they are present, to specify the output
dimensions and framerate. This allows us to receive JPEG frames larger than
2040 pixels in width/height.
Fixes #564437
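As an illustration of the kind of attribute involved (a=framesize, whose
value is "<payload> <width>-<height>"); whether the element reads it from
the SDP directly or from caps fields set by rtspsrc is not shown here:

    #include <stdio.h>
    #include <gst/sdp/gstsdpmessage.h>

    static gboolean
    get_dimensions (const GstSDPMedia * media, gint * width, gint * height)
    {
      const gchar *framesize;

      /* a=framesize:<payload> <width>-<height> */
      framesize = gst_sdp_media_get_attribute_val (media, "framesize");
      if (framesize == NULL)
        return FALSE;

      return sscanf (framesize, "%*d %d-%d", width, height) == 2;
    }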
Otherwise that code will just be expanded to nothing when compiled with
-DG_DISABLE_ASSERT. (PS: why is mainloop_start() called in the init
function and not when changing state to READY?)
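The pitfall, for reference (fragment, illustrative only):

    gboolean started;

    /* wrong: with -DG_DISABLE_ASSERT the whole statement, including the
     * call, is compiled out:
     *   g_assert (mainloop_start ());
     */

    /* safe: the call always runs, only the check is compiled out */
    started = mainloop_start ();
    g_assert (started);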
For some reason flac doesn't call our metadata callback when we operate
in push mode with unframed input, but that's where we set up the
newsegment event (since that's where we'd get the duration from the
stream info header), so we didn't send a newsegment event at all in this
case. Hack around this by storing a generic newsegment event for now
which will be used if we don't replace it with a better one that
includes the duration.
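The gist of the workaround in 0.10 terms (the start_segment field name is
illustrative, not the decoder's actual one):

    /* fallback: an open-ended TIME newsegment; replaced later by one that
     * carries the duration if the metadata callback does run */
    dec->start_segment =
        gst_event_new_new_segment (FALSE, 1.0, GST_FORMAT_TIME, 0, -1, 0);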