gst_video_decoder_negotiate_default_caps() is meant to pick a default output
format when we need one earlier than usual because of an incoming GAP.
It tries to use the input caps as a base, if available, and falls back to a
default format (I420 1280x720@30) for the missing fields.
But the framerate and pixel-aspect-ratio were not explicitly passed to
gst_video_decoder_set_output_state(), which relies solely on the reference
input state to get the framerate and pixel-aspect-ratio.
So there is no need to manually handle those two fields:
gst_video_decoder_set_output_state() will already use the ones from
upstream if available, and they will be ignored anyway if they are not.
This also prevents confusing debug output where we claim to use a
specific framerate while actually none was set.
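A minimal sketch of the idea, assuming the usual GstVideoDecoder API (the
variable names and the fallback selection below are illustrative, not the
actual patch): passing the input state as the reference is enough for the base
class to pick up the framerate and pixel-aspect-ratio.

  /* Fall back to I420 1280x720 only for fields the input caps don't provide;
   * framerate and PAR come from the reference input state. */
  GstVideoCodecState *input = gst_video_decoder_get_input_state (decoder);
  GstVideoFormat format = GST_VIDEO_FORMAT_I420;
  gint width = 1280, height = 720;

  if (input && GST_VIDEO_INFO_FORMAT (&input->info) != GST_VIDEO_FORMAT_UNKNOWN)
    format = GST_VIDEO_INFO_FORMAT (&input->info);
  if (input && input->info.width != 0 && input->info.height != 0) {
    width = input->info.width;
    height = input->info.height;
  }

  GstVideoCodecState *output =
      gst_video_decoder_set_output_state (decoder, format, width, height, input);
  gst_video_codec_state_unref (output);
  if (input)
    gst_video_codec_state_unref (input);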
gstrtspconnection.c: In function ‘writev_bytes’:
gstrtspconnection.c:1348:10: error: ‘res’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
return res;
^
When multichannel audio data needs channel reordering, the audio data
is not copied correctly because the destination address of the memcpy
is wrong.
For example, the following command
$ gst-launch-1.0 pulsesrc ! audio/x-raw,channels=6,format=S16LE ! filesink location=test.raw
will reproduce this issue if there is a 6-channel audio input device.
This commit fixes that.
The detailed process of this issue is as follows:
1. gst-launch-1.0 calls gst_pulsesrc_prepare (gst-plugins-good/ext/pulse/pulsesrc.c)
1466 gst_pulsesrc_prepare (GstAudioSrc * asrc, GstAudioRingBufferSpec * spec)
1467 {
(skip...)
1480 {
1481 GstAudioRingBufferSpec s = *spec;
1482 const pa_channel_map *m;
1483
1484 m = pa_stream_get_channel_map (pulsesrc->stream);
1485 gst_pulse_channel_map_to_gst (m, &s);
1486 gst_audio_ring_buffer_set_channel_positions (GST_AUDIO_BASE_SRC
1487 (pulsesrc)->ringbuffer, s.info.position);
1488 }
In my environment, after line 1485 is processed, the positions of spec and s are:
spec->info.position[0] = 0
spec->info.position[1] = 1
spec->info.position[2] = 2
spec->info.position[3] = 6
spec->info.position[4] = 7
spec->info.position[5] = 8
s.info.position[0] = 0
s.info.position[1] = 6
s.info.position[2] = 2
s.info.position[3] = 1
s.info.position[4] = 7
s.info.position[5] = 8
The values of spec->info.position equal
GST_AUDIO_BASE_SRC(pulsesrc)->ringbuffer->spec.info.position.
2. gst_audio_ring_buffer_set_channel_positions calls
gst_audio_get_channel_reorder_map.
3. Arguments of gst_audio_get_channel_reorder_map are
from = s.info.position
to = GST_AUDIO_BASE_SRC(pulsesrc)->ringbuffer->spec.info.position
At the end of this function, reorder_map is set to
reorder_map[0] = 0
reorder_map[1] = 3
reorder_map[2] = 2
reorder_map[3] = 1
reorder_map[4] = 4
reorder_map[5] = 5
4. Go back to gst_audio_ring_buffer_set_channel_positions and
2065 buf->need_reorder = TRUE;
is processed.
5. Finally, in gst_audio_ring_buffer_read,
1821 if (need_reorder) {
(skip...)
1829 memcpy (data + i * bpf + reorder_map[j] * bps, ptr + j * bps, bps);
is processed and causes this issue.
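For reference, this is how a reorder map is meant to be applied according to
the documented gst_audio_get_channel_reorder_map() semantics (channel j of the
input goes to channel reorder_map[j] of the output); a sketch only, where
frames, channels, bps, src and dst are placeholders:

  gint i, j;
  for (i = 0; i < frames; i++) {
    const guint8 *in = src + i * channels * bps;
    guint8 *out = dst + i * channels * bps;

    for (j = 0; j < channels; j++)
      memcpy (out + reorder_map[j] * bps, in + j * bps, bps);
  }

This is the pattern line 1829 above attempts; whether it copies data to the
right place depends on which position orderings the map was computed from and
to.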
Otherwise we would always return EOF if nothing was written, even if
this was actually a case of TIMEOUT or EWOULDBLOCK, for example.
Thanks to Edward Hervey for debugging and finding this issue.
Fixes 2 problems:
1) The number of unmapped memories does not always match the number of mapped
ones in dispatch_write().
2) When dispatch_write() is dispatched a second time after an incomplete write,
the already-set offsets are not taken into account, so corrupt RTP data will
be sent.
This makes it unnecessary for callers to first merge together all
memories, and it allows API like GstRTSPConnection to write them out
without first copying all memories together, by using a writev()-style
API to write multiple memories out in one go.
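As an illustration of the writev()-style idea (a sketch only, not the actual
GstRTSPConnection code; ostream and buffer are placeholders and error handling
is omitted), every memory of a buffer can be mapped individually and handed to
the OS as one vector per memory:

  guint i, n_mem = gst_buffer_n_memory (buffer);
  GstMapInfo *maps = g_newa (GstMapInfo, n_mem);
  GOutputVector *vectors = g_newa (GOutputVector, n_mem);
  gsize bytes_written = 0;

  for (i = 0; i < n_mem; i++) {
    GstMemory *mem = gst_buffer_peek_memory (buffer, i);
    gst_memory_map (mem, &maps[i], GST_MAP_READ);
    vectors[i].buffer = maps[i].data;
    vectors[i].size = maps[i].size;
  }

  /* one call for all memories, no intermediate copy (GLib >= 2.60) */
  g_output_stream_writev (ostream, vectors, n_mem, &bytes_written, NULL, NULL);

  for (i = 0; i < n_mem; i++)
    gst_memory_unmap (gst_buffer_peek_memory (buffer, i), &maps[i]);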
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/370
g_source_remove() works only for a GSource which was attached
to the default GMainContext, but the GSource might be attached to
a custom context, depending on how gst_discoverer_start() was called.
Whichever context it was attached to, g_source_destroy() can clean it up.
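A minimal sketch of the difference (the timeout source and custom context are
just placeholders):

  GMainContext *ctx = g_main_context_new ();
  GSource *source = g_timeout_source_new (100);
  g_source_attach (source, ctx);            /* attached to a custom context */

  /* g_source_remove (g_source_get_id (source));  -- only searches the
   * default main context, so it would not find this source */

  g_source_destroy (source);                /* works for any context */
  g_source_unref (source);
  g_main_context_unref (ctx);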
Make this consistent with what autotools puts into the enabled_gl_apis
variable: autotools puts 'gl' in there instead of 'opengl'.
This would cause problems when building the -bad glmixers plugin
with meson against a -base that was built with autotools.
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/871
Otherwise the surface_width/surface_height stored in GstGLWindowPrivate
aren't changed, and sometimes an unnecessary reconfigure event is sent on
the sinkpad, resulting in upstream reconfiguring.
Example pipeline:
gst-launch-1.0 videotestsrc ! msdkvpp ! glimagesink
The start_time and end_time in this context have already
been adjusted for the input's rate by converting them to running
time above. What is needed afterwards is to compare these
with the output's start/stop running time, which also takes
into account the rate, so we are comparing equal things.
Multiplying these by the output's rate here only breaks this logic.
In most cases the input and output rate are the same, so this
multiplication effectively reverses the rate adjustment that happened
while converting to running time, which is why we see the video
playing at the original rate in tests.
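As a worked example, assuming the usual conversion for positive rates,
running_time = (position - segment.start) / ABS(rate) + segment.base:
with rate=2.0, start=0 and base=0, a buffer at position 4s gives

  running_time = 4s / 2.0 = 2s

and multiplying that by the rate again yields 4s, i.e. exactly the
adjustment being undone.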
Fixes #541
Binding the vertex array to 0 already unbinds everything else.
With the previous order, older versions of the Intel GL driver printed
an error for every single call when the vertex attrib arrays were
disabled after the vertex array had been bound to 0.
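A sketch of the order described above (the attribute locations are
placeholders):

  /* disable the attrib arrays while the vertex array object is still
   * bound, then unbind the VAO, which takes care of the rest */
  glDisableVertexAttribArray (position_attrib);
  glDisableVertexAttribArray (texcoord_attrib);
  glBindVertexArray (0);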
We make an allocator for temporary lines and then use this for all
the steps in the conversion that can do in-place processing.
Keep track of the number of lines each step needs and use this to
allocate the right number of lines.
Previously we would not always allocate enough lines and we would
end up with conversion errors as lines would be reused prematurely.
Fixes #350
ISO 14496-3 defines that audioObjectType 5 is a special case that
indicates SBR is present and that an additional field has to be
parsed to find the true audioObjectType.
There are two ways of signaling SBR within an AAC stream - implicit
and explicit (see [1] section 4.2). When explicit signaling is used,
the presence of SBR data is signaled by means of the SBR
audioObjectType in the AudioSpecificConfig data.
Normally the sample rate is specified by an index into a
table of common sample rates. However index 0x0f is a special case
that indicates that the next 24 bits contain the real sample rate.
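A sketch of the relevant part of AudioSpecificConfig parsing, based on
ISO 14496-3 (the GstBitReader br is assumed to be positioned at the start of
the config; error handling omitted):

  guint8 object_type, rate_index;
  guint32 sample_rate = 0;
  gboolean sbr = FALSE;

  gst_bit_reader_get_bits_uint8 (&br, &object_type, 5);
  gst_bit_reader_get_bits_uint8 (&br, &rate_index, 4);
  if (rate_index == 0x0f)
    /* escape: the real sample rate follows in the next 24 bits */
    gst_bit_reader_get_bits_uint32 (&br, &sample_rate, 24);

  if (object_type == 5) {
    /* explicit SBR signalling: read the extension sample rate, then the
     * true audioObjectType */
    sbr = TRUE;
    gst_bit_reader_get_bits_uint8 (&br, &rate_index, 4);
    if (rate_index == 0x0f)
      gst_bit_reader_get_bits_uint32 (&br, &sample_rate, 24);
    gst_bit_reader_get_bits_uint8 (&br, &object_type, 5);
  }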
[1] https://www.telosalliance.com/support/A-closer-look-into-MPEG-4-High-Efficiency-AAC
Fixes #39
Checking the address distance between the given begin/end sequence
arguments doesn't make sense: they are output parameters.
This is to fix a weird failure of libs_rtp on Windows.
Code in g_return_*() must not have side effects, as it
might be compiled out if -DG_DISABLE_CHECKS is used, in
which case we would read garbage off the stack.
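A minimal illustration of the pattern (read_field() is a made-up helper):

  /* Wrong: the call with side effects disappears with -DG_DISABLE_CHECKS,
   * leaving 'value' uninitialized. */
  guint value;
  g_return_val_if_fail (read_field (reader, &value), FALSE);

  /* Right: do the work unconditionally and check the result yourself. */
  if (!read_field (reader, &value))
    return FALSE;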
It breaks all the calculations. While it can make sense during
initialization, there's very little API that can be called with such
timecodes without ending up with wrong results.
The old API would only assert or return an invalid timecode, the new API
returns a boolean or NULL. We can't change the existing API
unfortunately but can at least deprecate it.
audioconvert's passthrough status can no longer be determined
strictly from input / output caps equality, as a mix-matrix can
now be specified.
We now call gst_base_transform_set_passthrough() dynamically, based
on the return value of the new gst_audio_converter_is_passthrough()
API, which takes the mix matrix into account.
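Roughly (a sketch, not the actual audioconvert code; 'convert' stands for the
element's GstAudioConverter):

  /* let the converter decide whether it is a no-op, mix matrix included,
   * instead of comparing input and output caps for equality */
  gboolean passthrough = gst_audio_converter_is_passthrough (convert);
  gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self), passthrough);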
The presence of a `key-mgmt` attribute will set the mikey appropriately.
We therefore don't need to check the return value (which will
be overwritten afterwards anyway).
CEA608_IN_CEA708_RAW is the same format as CEA708_RAW. Its only
difference is that it must contain only CEA608, and a format like this
does not exist in practice. In practice every element that handles raw
cc_data triplets must check each triplet for its actual content and
handle it accordingly.
For CC-only streams a parser could signal the existence of CEA608 and/or
CEA708 inside the caps but for metas this can only potentially be
signalled via the ALLOCATION query for negotiation purposes.
A separate format for this is not very useful and instead it should be a
format qualifier.
CEA608_S334_1A is the format defined by SMPTE S334-1 Annex A and which
is used for transferring CEA608 over SDI instead of CEA708 CDP packets.
According to RFC 3611, the extended report blocks in an XR packet can
have variable length. To visit each block, the iterator has to look
into the block header. Once the block type is extracted, users can parse
the detailed information with the provided functions.
Loss/Duplicate RLE
The Loss RLE and the Duplicate RLE have the same format, so
they can share parsers. For the unit test, a randomly generated
pseudo packet is used.
Packet Receipt Times
The packet receipt times report block has a list of receipt
times which are in [begin_seq, end_seq).
Receiver Reference Time parser for XR packet
The receiver reference time block has an ntptime, which is a 64-bit value.
DLRR
The DLRR report block consists of sub-blocks, each of which has an SSRC,
last RR, and delay since last RR. The number of sub-blocks has to be
calculated from the block length.
Statistics Summary
The Statistics Summary report block provides fixed-length
information.
VoIP Metrics
VoIP Metrics consists of several metrics, even though they are all in
one report block. Data retrieval functions are added per metric.
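A sketch of walking the variable-length blocks via the RFC 3611 block header
(8-bit block type, 8-bit type-specific byte, 16-bit block length in 32-bit
words not counting the header word); xr_body and xr_body_len are placeholders
for the XR packet body:

  const guint8 *p = xr_body;
  gsize remaining = xr_body_len;

  while (remaining >= 4) {
    guint8 block_type = p[0];
    guint16 block_len_words = GST_READ_UINT16_BE (p + 2);
    gsize block_size = (1 + block_len_words) * 4;

    if (block_size > remaining)
      break;                      /* truncated/malformed packet */

    /* dispatch on block_type here: 1 = Loss RLE, 2 = Duplicate RLE,
     * 3 = Packet Receipt Times, 4 = Receiver Reference Time, 5 = DLRR,
     * 6 = Statistics Summary, 7 = VoIP Metrics */

    p += block_size;
    remaining -= block_size;
  }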
https://bugzilla.gnome.org/show_bug.cgi?id=789822
Non-direct dmabuf uploads, just as direct dmabuf uploads, create EGL
images and thus GL textures of the same width as the imported image.
The input dmabuf line stride is not relevant to the resulting texture
in either case.
This fixes the case where non-direct uploads of input dmabufs with a line
stride larger than the width would, for example, cause glcolorconvert to
sample only the left part (width * bytes per pixel / stride) of the
image, producing a horizontally stretched and cropped output image.
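As a concrete, made-up example: a 320 pixel wide RGBA dmabuf with a 2048 byte
line stride would only be sampled over

  320 * 4 / 2048 = 0.625

of its width, i.e. only the left 62.5% of the image would make it into the
output.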
Pull video frame fields into local variables. Without this the
compiler must assume that they could have changed on every use and read
them from memory again.
This reduces the inner loop from 6 memory reads per pixel to 4, while the
number of writes stays at 3.
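A sketch of the pattern (the frame and the per-pixel work are placeholders):

  /* hoist everything the compiler cannot prove constant out of the loop */
  guint8 *src = GST_VIDEO_FRAME_PLANE_DATA (frame, 0);
  gint stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);
  gint width = GST_VIDEO_FRAME_WIDTH (frame);
  gint height = GST_VIDEO_FRAME_HEIGHT (frame);
  gint x, y;

  for (y = 0; y < height; y++) {
    guint8 *line = src + y * stride;
    for (x = 0; x < width; x++) {
      /* per-pixel work only touches 'line' and locals */
    }
  }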
gst_rtsp_connection_send() adds the Authorization header to the request.
If this function is called multiple times with the same request,
it will add one more Authorization header every time.
To fix this issue, do not append a new Authorization header on
top of existing ones: remove any existing Authorization headers first
and then add the new one.
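A sketch of the idea with the GstRTSPMessage API (message and auth_value are
placeholders, not the exact patch):

  /* drop all existing Authorization headers (-1 == all indices) before
   * adding the new one, so repeated sends don't accumulate headers */
  gst_rtsp_message_remove_header (message, GST_RTSP_HDR_AUTHORIZATION, -1);
  gst_rtsp_message_add_header (message, GST_RTSP_HDR_AUTHORIZATION, auth_value);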
Fixes gst-plugins-good#425
If multiple DRM connectors are connected, currently the first one is
picked. Improve this by adding an environment variable that allows for
choosing a connector by name. The connector names have been made
compatible with the modetest/modeprint DRM utilities.
Related to #490
* List all connectors, modes, and encoders, even after picking one
* Add missing DRM_MODE_CONNECTOR_DPI string for logging and improve
existing strings
* Make sure the names match modetest/modeprint from the DRM utilities
Related to #490
If we use the main loop it might happen that the caller (e.g. our unit
test) has already shut down the loop once the result was received, in
which case the pipeline would never be shut down (and our unit test
would hang).
This adds a few missing gst_object_unref calls for the opengl context in
gstglwindow_gbm_egl.c, as well as the missing close call for the
drm node fd in gst_gl_display_gbm_finalize.
While this creates a circular reference between the pipeline and the
context, this ensures that the context stays alive for as long as any
callbacks could be called on it. The circular reference is broken once
the conversion is finished (or error, or timeout), which will then cause
everything to be freed.
Previously it was possible that a callback could be called on the
context right after it was freed already.
Also use only a single context structure; the second structure does not
simplify anything and duplicates storage.
This patch adds API to the audio decoder base class for setting arbitrary
caps on the source pad. Previously only caps converted from audio info were
possible. This is particularly useful when a subclass wants to set caps
features for an audio decoder producing metadata.
The Y210 format was added in the middle of the formats enum and list,
introducing an ABI break.
This issue was detected thanks to the gstreamer-rs test harness.
We assume here the same data format for the user data as for the
DID/SDID: 10 bits with parity in the upper 2 bits. In theory some
standards could define this differently and even have full 10 bits of
user data but there does not seem to be a single such standard after
all these years.
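A sketch of reading one such 10-bit word, assuming the SMPTE 291 convention
for DID/SDID words (bit 8 makes bits 0-8 even parity, bit 9 is the inverse of
bit 8):

  guint16 word = 0x152;          /* example word: payload 0x52, b8=1, b9=0 */
  guint8 value = word & 0xff;    /* the actual 8-bit user data */
  guint parity = value;
  gboolean b8 = (word >> 8) & 1;
  gboolean b9 = (word >> 9) & 1;

  parity ^= parity >> 4;
  parity ^= parity >> 2;
  parity ^= parity >> 1;
  /* bit 8 must equal the parity of bits 0-7, bit 9 must be its complement */
  gboolean parity_ok = (b8 == (parity & 1)) && (b9 == !b8);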
Currently in Python it would become a signed 64 bit value but should
actually be an unsigned 32 bit value with all bits set.
This is the same problem as with GST_MESSAGE_TYPE_ANY.
See https://bugzilla.gnome.org/show_bug.cgi?id=732633
Rather than letting gst_gl_memory_setup_buffer guess the GL format used
for an eglimage after importing a dmabuf, be explicit about it. This
fixes issues where the dmabuf import may have used another format than
gst_gl_format_from_video_info would guess on the basis of the available
GL extensions.
In particular, on etnaviv gst_gl_format_from_video_info would
assume a luminance + alpha GL format is used for YUY2, but the dmabuf
import will always use RG88, which causes images to end up somewhat pink
when displayed on the screen.