For a single format in the caps, we were returning some weird answers,
like only RGB formats for an RGB input, even though we can also produce
YUV from RGB.
Fixup of 3cfff727b1, where I thought my
previous (~6 months younger) self had got this right. Don't trust your
previous self, people!
Also yield common options to the outer project (gst-build in our case)
so that they don't have to be set manually, and use array types for
some options.
Otherwise subclasses might accidentally use the old audioinfo/caps.
None of the subclasses currently uses the audioinfo/caps, but future
subclasses might.
https://bugzilla.gnome.org/show_bug.cgi?id=795827
Instead of having special cases at each GL texture creation, upload,
readback or copy for all non-8-bits-per-component formats, simply store
the more specific format and retrieve the generic component/type tuple
from that.
Introduce a helper function for retrieving the generic GL format (RGBA,
RGB, RG, R, L, A) and type (BYTE, SHORT, SHORT_5_6_5) from a sized
GL format enum (RGBA8, RGB565, RG8, etc).
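A minimal sketch of what such a helper could look like, assuming the GL
headers are available; the function name and the exact set of formats
covered here are illustrative, not the actual API added by this commit:

    /* Hypothetical helper: map a sized GL format to its unsized
     * format/type pair. Names and coverage are illustrative only. */
    static void
    get_unsized_gl_format (GLenum sized_format, GLenum * format, GLenum * type)
    {
      switch (sized_format) {
        case GL_RGBA8:
          *format = GL_RGBA;  *type = GL_UNSIGNED_BYTE;        break;
        case GL_RGB565:
          *format = GL_RGB;   *type = GL_UNSIGNED_SHORT_5_6_5; break;
        case GL_RG8:
          *format = GL_RG;    *type = GL_UNSIGNED_BYTE;        break;
        case GL_R8:
          *format = GL_RED;   *type = GL_UNSIGNED_BYTE;        break;
        default:
          g_assert_not_reached ();
      }
    }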
It behaves similarly to the existing alpha element, but performs its
calculations in floating point rather than with small (guint8) integers,
so some differences are to be expected.
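To illustrate why results can differ slightly (hypothetical code, not the
element's actual implementation): the integer path rounds at 8 bits on
every step, while the float path only quantizes once at the end.

    /* Illustrative only: the same scaling done in guint8 vs. float
     * arithmetic can round differently, hence small output differences. */
    static guint8
    scale_u8 (guint8 value, guint8 alpha)
    {
      return (guint8) (((guint) value * alpha + 127) / 255);
    }

    static guint8
    scale_f32 (guint8 value, gfloat alpha)
    {
      return (guint8) ((gfloat) value * alpha + 0.5f);
    }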
https://bugzilla.gnome.org/show_bug.cgi?id=794070
The problem is that even though the functions we are calling are
in-place transformations, orc automatically puts the restrict keyword
on all arguments. To silence that warning, just create yet another
variable containing the same value.
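In other words, the workaround is just an extra local aliasing the same
buffer, e.g. (sketch; the variable and function names are made up):

    /* Sketch of the workaround: orc-generated functions declare their
     * arguments as restrict, so passing the same pointer twice triggers
     * a restrict-related warning. A second variable with the same value
     * keeps the compiler quiet. */
    guint8 *dest = frame_data;
    guint8 *src = frame_data;   /* same value, silences the warning */

    orc_process_in_place (dest, src, n_pixels);  /* hypothetical call */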
https://bugzilla.gnome.org/show_bug.cgi?id=795765
It is possible that both the application and the stream are waiting
at the same time, if for example the following happens:
1) the app is waiting because there is no buffer in appsink
2) appsink provides a buffer and wakes up the app
3) appsink gets another buffer and waits because it is now full
4) the app thread gets control back
Previously, step 4 would overwrite the fact that the appsink is
currently waiting, so it would never be signalled again.
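A minimal sketch of the idea behind the fix, assuming two independent
waiting flags; the field names and the queue_is_full() helper are made
up and the real appsink code differs in details:

    /* Sketch only: track the two waiters independently so that the app
     * thread regaining control (step 4) cannot clobber the fact that
     * the streaming thread is still waiting. */
    typedef struct {
      GMutex mutex;
      GCond cond;
      gboolean streaming_waiting;   /* streaming thread blocked, queue full */
      gboolean app_waiting;         /* app blocked waiting for a buffer     */
    } State;

    /* streaming thread, queue full: */
    state->streaming_waiting = TRUE;
    while (queue_is_full (state))
      g_cond_wait (&state->cond, &state->mutex);
    state->streaming_waiting = FALSE;

    /* app thread, after taking a buffer out of the queue: */
    if (state->streaming_waiting)
      g_cond_signal (&state->cond);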
https://bugzilla.gnome.org/show_bug.cgi?id=795551
Instead of trying to guess what profile to build, just get the possible
elements to use with the specified caps and determine the
EncodingProfile from them.
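Roughly, the idea is along these lines (sketch using the public
factory-listing API; the actual code in this commit may differ):

    /* Sketch: find encoder factories that can produce the requested caps
     * and build an encoding profile from those caps instead of guessing. */
    GList *encoders, *usable;
    GstEncodingProfile *profile = NULL;

    encoders = gst_element_factory_list_get_elements (
        GST_ELEMENT_FACTORY_TYPE_ENCODER, GST_RANK_MARGINAL);
    usable = gst_element_factory_list_filter (encoders, caps,
        GST_PAD_SRC, FALSE);

    if (usable != NULL)
      profile = (GstEncodingProfile *)
          gst_encoding_video_profile_new (caps, NULL, NULL, 0);

    gst_plugin_feature_list_free (usable);
    gst_plugin_feature_list_free (encoders);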
https://bugzilla.gnome.org/show_bug.cgi?id=795490
With the way caps negotiation works in encoders, the only way to ensure
that no downstream renegotiation happens in the encoder is to also lock
the upstream caps. In any case, with the current behaviour, upstream
elements of encoders are *required* to handle any file format, so
locking the upstream format should be safe.
https://bugzilla.gnome.org/show_bug.cgi?id=795464
When frames are dropped or reordered, the serialized events are
collected and pushed with the next frame. This test verifies that the
order is preserved.
https://bugzilla.gnome.org/show_bug.cgi?id=794192
If we can guarantee that the lifetime of the fd is longer than that of
the memory, we can use the DONT_CLOSE flag so the fd is not closed on
release. gstfdmemory already provides this, but gstdmabuf does not yet.
For example, when using VA-API or MSDK we would need this API.
Otherwise we have to call dup to duplicate the fd.
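For reference, a sketch of the current situation: with gstfdmemory the
caller can pass GST_FD_MEMORY_FLAG_DONT_CLOSE, while with the dmabuf
allocator the fd has to be duplicated first (illustrative code, the
fd/size/allocator variables are assumed to come from the caller):

    #include <unistd.h>
    #include <gst/allocators/allocators.h>

    /* fd memory already supports keeping ownership of the fd: */
    GstMemory *fd_mem = gst_fd_allocator_alloc (fd_allocator, fd, size,
        GST_FD_MEMORY_FLAG_DONT_CLOSE);

    /* dmabuf memory currently takes ownership, so the fd must be dup()ed
     * if the caller (e.g. VA-API or MSDK) still needs it afterwards: */
    GstMemory *dmabuf_mem = gst_dmabuf_allocator_alloc (dmabuf_allocator,
        dup (fd), size);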
https://bugzilla.gnome.org/show_bug.cgi?id=794829
This pixel format is a fully packed variant of NV12: a luma
pixel takes 10 bits in memory, without any filler bits
between pixels within a stride. The color range follows
the BT.2020 standard.
To improve the performance of hardware memory operations,
the stride may be extended, appending zero data at the
end of each line.
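To illustrate the packing: four 10-bit luma samples occupy exactly five
bytes, so an unpadded line is width * 10 / 8 bytes, which hardware may
round up. A sketch, where the 16-byte alignment is only an assumed
example:

    /* Illustrative stride computation for a fully packed 10-bit format:
     * every 4 pixels take 5 bytes; hardware may additionally pad each
     * line (the 16-byte alignment below is only an example). */
    gsize packed = (width * 10 + 7) / 8;
    gsize stride = GST_ROUND_UP_16 (packed);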
Signed-off-by: ayaka <ayaka@soulik.info>
https://bugzilla.gnome.org/show_bug.cgi?id=795462
In the situation described in
https://bugzilla.gnome.org/show_bug.cgi?id=795397,
downstream_caps consists of two structures: the first with
the preferred rate (44100) if at all possible, the second
containing the full range of allowed rates, as audioresample
correctly tries to negotiate passthrough caps.
As audioaggregator cannot perform rate conversion, it wants
to return a fixated rate from its getcaps implementation.
However, it previously used the first structure of the caps
allowed downstream directly, without taking the filter into
consideration, to determine the rate to fixate to.
With this, we first intersect our downstream caps with the
filter, in order not to fixate to an unsupported rate.
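Schematically, the getcaps path now does something like this (a sketch
only; the real code is more involved and handles writability, missing
fields, etc.):

    /* Sketch: take the filter into account before picking a rate. */
    GstCaps *caps = downstream_caps;
    GstStructure *s;
    gint rate = 0;

    if (filter != NULL)
      caps = gst_caps_intersect_full (filter, downstream_caps,
          GST_CAPS_INTERSECT_FIRST);

    /* only now look at the first structure to determine the rate */
    s = gst_caps_get_structure (caps, 0);
    if (gst_structure_has_field (s, "rate"))
      gst_structure_fixate_field_nearest_int (s, "rate", 44100);
    gst_structure_get_int (s, "rate", &rate);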
Otherwise it won't be possible to specify some profiles such as
video/x-h264,profile=(string)high-4:4:4
With this patch, we can do
video/x-h264,profile=(string)high-4\:4\:4
The default implementation for packet loss handling previously
always sent a gap event.
While this is correct as long as we know the packet that was
lost was actually a media packet, with ULPFEC this becomes
a bit more complicated, as we do not know whether the packet
that was lost was a FEC packet, in which case it is better
to not actually send any gap events in the default implementation.
Some payloaders can be more clever about this; for example VP8 can
use the picture-id and the M and S bits to determine whether
the missing packet was inside an encoded frame or outside,
and thus whether it was a media packet or a FEC packet.
This is why ulpfecdec still lets these lost events go through,
though it strips them of their seqnum and appends a new
"might-have-been-fec" field to them.
This is all a bit terrible, but necessary to have ULPFEC
integrate properly with the rest of our RTP stack.
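A depayloader that wants to take advantage of this could inspect the
field on the custom packet-lost event along these lines (a sketch:
error handling is omitted and the "GstRTPPacketLost" structure name is
the one used by the RTP jitterbuffer):

    /* Sketch: check whether a lost packet might actually have been FEC.
     * 'event' is the incoming custom packet-lost event. */
    const GstStructure *s = gst_event_get_structure (event);
    gboolean maybe_fec = FALSE;

    if (gst_structure_has_name (s, "GstRTPPacketLost"))
      gst_structure_get_boolean (s, "might-have-been-fec", &maybe_fec);

    if (!maybe_fec) {
      /* the loss was definitely media: send a gap event as before */
    }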
https://bugzilla.gnome.org/show_bug.cgi?id=794909
Otherwise decodebin won't get notified about STREAM_COLLECTION coming
from the sources and thus will never get informed about it. Without
being informed about the stream collection, decodebin won't be able to
select any streams. It ends up not creating any output for the streams
defined from outside parserbin.
https://bugzilla.gnome.org/show_bug.cgi?id=795364