gst_query_set_nth_allocation_pool() requires there to be a pool in the
query already. This is not always the case when we get the query from
upstream. Use gst_query_add_allocation_pool() instead in such case.
https://bugzilla.gnome.org/show_bug.cgi?id=681719
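For illustration, a minimal sketch of that pattern in an allocation-query handler (variable names are placeholders):

    #include <gst/gst.h>

    static void
    update_allocation_pool (GstQuery * query, GstBufferPool * pool,
        guint size, guint min, guint max)
    {
      /* If the query already carries a pool we can update it in place,
       * otherwise we have to add a new entry. */
      if (gst_query_get_n_allocation_pools (query) > 0)
        gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
      else
        gst_query_add_allocation_pool (query, pool, size, min, max);
    }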
When we get a new buffer, always call the parse function, even if it is a
0-sized buffer. For Theora we also need to decode 0-sized buffers.
Ideally we would like to make theoradec packetized, but that currently fails
because of oggdemux and because of the assumptions the base class makes.
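A rough sketch of the idea, not the actual base-class code (function and argument names here are placeholders):

    #include <gst/video/gstvideodecoder.h>

    /* Feed a new input buffer and call the subclass parse vfunc even when
     * the buffer is 0-sized, so decoders like theoradec get to handle it. */
    static GstFlowReturn
    feed_input_buffer (GstVideoDecoder * decoder, GstVideoDecoderClass * klass,
        GstVideoCodecFrame * frame, GstAdapter * adapter, GstBuffer * buf)
    {
      if (gst_buffer_get_size (buf) > 0)
        gst_adapter_push (adapter, buf);
      else
        gst_buffer_unref (buf);

      /* previously this was skipped for empty buffers */
      return klass->parse (decoder, frame, adapter, FALSE);
    }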
Instead, remember we need a keyframe, and we will force the encoder
to emit one next time we submit a new frame.
Since libtheora does not have an API to request a keyframe, we reset
the max keyframe interval to 1 temporarily.
This has the advantage that the rate control keeps its history,
and that the encoder won't choose different quant tables or
somesuch, thus requiring new streamheaders (although this is
probably only a theoretical possibility). Should also be a
bit faster than resetting the encoder.
https://bugzilla.gnome.org/show_bug.cgi?id=663350
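Roughly, using libtheora's th_encode_ctl() (the surrounding element state handling is omitted; this is only a sketch):

    #include <theora/theoraenc.h>

    /* Temporarily force the keyframe interval to 1 so the next submitted
     * frame is encoded as a keyframe, then restore the configured value. */
    static void
    force_next_keyframe (th_enc_ctx * enc, ogg_uint32_t keyframe_interval)
    {
      ogg_uint32_t one = 1;

      th_encode_ctl (enc, TH_ENCCTL_SET_KEYFRAME_FREQUENCY_FORCE,
          &one, sizeof (one));
      /* ... submit the frame with th_encode_ycbcr_in() here ... */
      th_encode_ctl (enc, TH_ENCCTL_SET_KEYFRAME_FREQUENCY_FORCE,
          &keyframe_interval, sizeof (keyframe_interval));
    }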
Clamp timestamp interpolation to 0 to avoid going negative. This should not
happen, really, but until the interpolation is improved this seems better.
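The clamp itself is trivial; an illustrative helper (the interpolation math is not shown):

    #include <gst/gst.h>

    /* Interpolation is done in signed arithmetic; clamp the result to 0
     * instead of letting it wrap into a bogus unsigned timestamp. */
    static GstClockTime
    clamp_interpolated_ts (gint64 interpolated)
    {
      return (interpolated < 0) ? 0 : (GstClockTime) interpolated;
    }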
This allows getting a pad for a specific encoding profile, which can
be useful when there are several stream profiles of the same type.
Also update the encodebin unit tests so that we check that the returned
pad has the right caps.
https://bugzilla.gnome.org/show_bug.cgi?id=689845
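For example (a sketch; the action signal name "request-profile-pad" and its string argument are assumptions about the new API):

    #include <gst/gst.h>

    static GstPad *
    request_profile_pad (GstElement * encodebin, const gchar * profile_name)
    {
      GstPad *pad = NULL;

      /* ask encodebin for the pad corresponding to one specific stream
       * profile, identified here by its name */
      g_signal_emit_by_name (encodebin, "request-profile-pad", profile_name,
          &pad);
      return pad;
    }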
Before, it was done the other way around, and that could trigger the assert that
is already in place. This also makes more sense: when seeking to time x, we want
the sample that is <= that position.
This allows elements to specify a function to upload
a buffer content to a specific OpenGL texture ID. It
could be used by the vaapi elements to provide a way
for eglglessink or WebKit to upload a VA surface to
a GL texture without the respective sinks knowing
anything about VA.
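A sketch of how a producer could attach such a meta with gst_buffer_add_video_gl_texture_upload_meta() (the arguments follow the current form of the helper; the texture type, orientation and the callback body are illustrative, not VA-API code):

    #include <gst/video/video.h>

    /* The sink calls this back with the GL texture ID(s) it wants filled;
     * a VA-API element would upload its surface into texture_id[0] here. */
    static gboolean
    upload_to_texture (GstVideoGLTextureUploadMeta * meta, guint texture_id[4])
    {
      return TRUE;
    }

    static void
    attach_upload_meta (GstBuffer * buffer, gpointer element_private)
    {
      GstVideoGLTextureType types[4] = { GST_VIDEO_GL_TEXTURE_TYPE_RGBA, };

      gst_buffer_add_video_gl_texture_upload_meta (buffer,
          GST_VIDEO_GL_TEXTURE_ORIENTATION_X_NORMAL_Y_NORMAL, 1, types,
          upload_to_texture, element_private, NULL, NULL);
    }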
Try to select the conversion that would result in the minimal amount of quality
loss. Quality loss is calculated rather arbitrarily but it avoids doing
something really stupid in most cases.
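Schematically (the scoring below is made up purely to illustrate "pick the candidate with the smallest loss"; it is not the real videoconvert table):

    #include <glib.h>

    /* Return the index of the candidate conversion with the lowest
     * accumulated quality-loss score. */
    static gint
    pick_best_conversion (const gint * loss_scores, gint n_candidates)
    {
      gint i, best = -1, best_loss = G_MAXINT;

      for (i = 0; i < n_candidates; i++) {
        if (loss_scores[i] < best_loss) {
          best_loss = loss_scores[i];
          best = i;
        }
      }
      return best;
    }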
The ringbuffer was released after setting values in its spec field
in gst_audio_base_src_setcaps(). This led to failures when
gst_audio_base_src_setcaps() is called more than once.
https://bugzilla.gnome.org/show_bug.cgi?id=696540
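The corrected ordering, roughly (a sketch of a setcaps handler, not the exact element code):

    #include <gst/audio/audio.h>

    static gboolean
    setcaps_sketch (GstAudioRingBuffer * ringbuffer,
        GstAudioRingBufferSpec * spec, GstCaps * caps)
    {
      /* Release first: doing it after filling in the spec wiped the values
       * again when setcaps was called a second time. */
      gst_audio_ring_buffer_release (ringbuffer);

      if (!gst_audio_ring_buffer_parse_caps (spec, caps))
        return FALSE;

      return gst_audio_ring_buffer_acquire (ringbuffer, spec);
    }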
This reverts commit adc9694ed7.
No need to restrict the conversion; we can handle interlacing correctly. We
basically unpack each field, convert each field to the target colorspace, then
pack and interleave each field into the target format. We also disable any
fast path that can't deal with interlaced formats.
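In other words, something along these lines (purely illustrative, with the per-line conversion abstracted behind a callback; this is not the videoconvert code):

    #include <glib.h>

    /* Process the two fields of an interlaced frame separately by walking
     * every other line of the source and destination. */
    static void
    convert_fields_separately (guint8 * dest, const guint8 * src,
        gint stride, gint height,
        void (*convert_line) (guint8 * dst_line, const guint8 * src_line))
    {
      gint field, line;

      for (field = 0; field < 2; field++)
        for (line = field; line < height; line += 2)
          convert_line (dest + line * stride, src + line * stride);
    }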
Do not use the buffer start offset when it is invalid, otherwise a
discontinuity is detected on the next buffer, the subtitle parser is
reset, and some subtitle lines are not shown.
Also remove unused next_offset field.
https://bugzilla.gnome.org/show_bug.cgi?id=693981
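Illustratively (the expected-offset bookkeeping is a placeholder for the parser's state):

    #include <gst/gst.h>

    /* Only treat the buffer offset as a discontinuity indicator when it is
     * actually valid; an invalid offset must not reset the parser. */
    static gboolean
    buffer_is_discont (GstBuffer * buf, guint64 expected_offset)
    {
      if (!GST_BUFFER_OFFSET_IS_VALID (buf))
        return FALSE;

      return GST_BUFFER_OFFSET (buf) != expected_offset;
    }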