When measuring video latency, one mechanism involves taking a photo
with a camera of two screens showing the test video overlaid with
timeoverlay or clockoverlay. In these cases, if the display's pixel
response time is poor, you will see ghosting, which can make it quite
difficult to discern which timestamp is currently being shown.
This commit adds a property that *also* shows the timestamp in
a different (sequentially predictable) location every frame, which
makes it easy to tell what the latest rendered timestamp is.
For bonus points, you can also use the fade-time of the previous frame
to measure with sub-framerate accuracy when the photo was taken, not
just clamped to the framerate, giving you a higher precision latency
value.
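As a sketch, the kind of overlay setup this is aimed at looks roughly like
this (element choices are illustrative):
```
# Show a timestamp overlay on the reference screen; the second screen shows
# the same stream after the transmission path under test. Photographing both
# screens in one shot and comparing the overlaid timestamps gives the latency.
gst-launch-1.0 videotestsrc is-live=true ! timeoverlay ! autovideosink
```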
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6935>
Some subtitle "decoders" had a wrong category of "Parser", which `parsebin`
relies on to identify elements which do not *decode* streams but *parse* them.
This would cause such subtitle decoders to be plugged inside parsebin,
preventing the original stream from being properly used by (more efficient)
downstream decoders or subtitle renderers.
Fixes #1757
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6153>
The fake video decoder ignores input bitstream except
to enforce caps restrictions. It reads video width,
height and framerate from caps. Then it just pushes
video frames without doing any decoding.
The fake video decoder just draws a snake moving from
left to right in the middle of the frame. This is a
lightweight drawing operation that still gives an idea
of how smooth the rendering is.
The fake video decoder inherits from GstVideoDecoder.
It is useful to measure how smooth the whole rendering
pipeline would be if you had the most efficient video
decoder. It is also useful for bisecting issues, for
example when a specific video decoder is suspected.
Handles mpeg2, mpeg4, h263, h264, theora, vp8, wmv3, msmpeg,
flash-video, vp6, vp9, wmv1, wmv2 and divx, but more can be
added if needed.
For now it can only output RGBA, RGBx, BGRA, BGRx.
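As a sketch, it can also be used explicitly in a pipeline (the file name and
demux/parse chain below are illustrative, assuming an H.264 input stream):
```
# Feed parsed H.264 into the fake decoder; videoconvert handles the
# RGBA/RGBx/BGRA/BGRx output listed above.
gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
    fakevideodec ! videoconvert ! autovideosink
```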
Its rank is 0 (none) but I added a property to change it so
that it can be selected by decodebin.
gst-launch-1.0 fakevideodec rank=512 \
    playbin uri=http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4
http://bugzilla.gnome.org/show_bug.cgi?id=723778
Closes #679
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5636>
Back in the mists of time[1], we switched `giostream*` elements to not close the
stream on stop() so that applications that needed a handle to the stream after
the element stopped had it.
Unfortunately, we also have cases[2] where waiting for the element to be
finalized is too late for the stream to be closed.
In order to not change the behaviour of the element, we add a property to allow
users to select the desired behaviour.
[1]: https://bugzilla.gnome.org/show_bug.cgi?id=587896
[2]: gst-plugins-rs#423
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5372>
These 10-bit formats are identical to NV12_16L32S, except that every 64 bytes
of data is prefixed with 16 bytes holding the lower 2 bits of four pixels per
byte (16 bytes x 4 pixels covers the same 64 pixels as the 64 bytes of
upper-8-bit data). For MT2110T, the lower two bits are packed so that each
byte contains a column of 4 pixels, also described as tiled lower 2 bits.
MT2110T has been chosen as a name to match the vendor's chosen name; this
format is unlikely to exist for other vendors. For MT2110R, the 2 low bits
are in raster order.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3444>
The current limit is `x10`, which allows just `+20 dB` of gain.
While it may seem sufficient, this came up as a problem
in a real-world, non-specially-engineered situation,
in strawberry's EBU R 128 loudness normalization.
(https://github.com/strawberrymusicplayer/strawberry/pull/1216)
There is an audio track (that was not intentionally engineered that way)
that has an integrated loudness of `-38 LUFS`,
and if we want to normalize its loudness to e.g. `-16 LUFS`,
which is a very reasonable thing to do,
we need to apply a gain of `+22 dB`,
which is larger than `+20 dB`, and we fail...
I think it should allow at least `+96 dB` of gain,
and therefore should be at `10^(96/20) ~= 63096`.
But, I don't see why we need to put any specific restriction
on that parameter in the first place, other than the fact
that the fixed-point multiplication scheme does not support volume
larger than 15x-ish.
So let's just implement a floating-point fallback path
that does not involve fixed-point multiplication
and lift the restriction altogether.
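For illustration, with the restriction lifted the `+22 dB` case above maps to
a linear factor of `10^(22/20) ~= 12.6`, which the old `x10` cap could not
express:
```
# Apply roughly +22 dB of gain, i.e. a linear volume factor of ~12.6,
# above the previous maximum of 10.0:
gst-launch-1.0 audiotestsrc ! volume volume=12.6 ! fakesink
```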
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5063>
Add support for generating 10/12/14/16 bit bayer test patterns.
The implementation is rather simplistic: take the ARGB input,
generate 16-bit data out of it instead of 8-bit, shift it as
required by the output bit depth, and apply an endian swap.
Example usage:
```
$ gst-launch-1.0 videotestsrc ! \
video/x-bayer,width=512,height=512,format=bggr12le ! \
bayer2rgb ! \
video/x-raw,format=RGBA64_LE ! \
videoconvert ! \
autovideosink
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4686>
Proxy the force-live and min-upstream-latency properties to the internal
glvideomixerelement at construction time. force-live has to be set
during construction of the glvideomixerelement, so that has to be
deferred until the _constructed() call. Make sure that all other
existing proxied properties will still get set once the element
is created.
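With this in place, force-live can be set on the glvideomixer bin directly,
e.g. (sink choice illustrative):
```
# force-live is forwarded to the internal glvideomixerelement, so the mixer
# runs in live mode and aggregates on timeout instead of waiting on inputs.
gst-launch-1.0 glvideomixer name=m force-live=true ! glimagesink \
    videotestsrc is-live=true ! m.
```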
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4494>
propose_allocation is implemented to meet applications' need to request
buffers. An application sometimes needs to create the buffer pool and
request buffers itself so that it can manage the buffers on its own, with
the GStreamer elements importing the application's buffers for use. So,
add propose_allocation to appsink, as waylandsink, kmssink, etc. already do.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4185>