Instead of a sequence of if statements, declare a table that maps
profile idc to profiles and traverse it.
Also, first add the profile from the parsed profile idc, and only
afterwards add the profiles derived from the compatibility flags into
the profile array.
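A minimal sketch of such a table-driven mapping, with illustrative
entries (not the actual table from gsth265parser.c):

    typedef struct
    {
      GstH265ProfileIDC idc;
      GstH265Profile profile;
    } H265ProfileMap;

    static const H265ProfileMap profile_map[] = {
      {GST_H265_PROFILE_IDC_MAIN, GST_H265_PROFILE_MAIN},
      {GST_H265_PROFILE_IDC_MAIN_10, GST_H265_PROFILE_MAIN_10},
      {GST_H265_PROFILE_IDC_MAIN_STILL_PICTURE, GST_H265_PROFILE_MAIN_STILL_PICTURE},
      /* ... */
    };

    static GstH265Profile
    map_profile_idc (guint8 profile_idc)
    {
      guint i;

      for (i = 0; i < G_N_ELEMENTS (profile_map); i++) {
        if (profile_map[i].idc == profile_idc)
          return profile_map[i].profile;
      }
      return GST_H265_PROFILE_INVALID;
    }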
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
It's possible for a HEVC stream to have multiple profiles given the
compatibility bits. Instead of returning a single profile, the internal
gst_h265_profile_tier_level_get_profiles() returns an array with all
its possible profiles.
Profiles are appended to the array only if the generated profile
is not invalid.
gst_h265_profile_tier_level_get_profile() is rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), returning the first
profile found in the array.
And gst_h265_get_profile_from_sps() is also rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), but traversing the array
and verifying whether the proposed profile is actually valid according
to Annex A.3.x of the specification.
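For illustration, a hedged sketch of how the single-profile getter can
be expressed on top of the array variant; the internal signature and
the array size are assumptions, not taken from the patch:

    GstH265Profile
    gst_h265_profile_tier_level_get_profile (const GstH265ProfileTierLevel * ptl)
    {
      /* array size is illustrative */
      GstH265Profile profiles[16] = { GST_H265_PROFILE_INVALID, };
      guint len = 0;

      /* assumed signature: fills the array with every candidate profile */
      gst_h265_profile_tier_level_get_profiles (ptl, profiles, &len);

      return len > 0 ? profiles[0] : GST_H265_PROFILE_INVALID;
    }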
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
BT.2020 color primaries are designed to cover a much wider range of
CIE chromaticity than BT.709, and they are used for both SDR and HDR
content. So the incorrect assumption (i.e., treating BT.709 content as
BT.2020) is risky, and the resulting image color tends to be visually
very wrong.
Unless there's an obvious clue, don't consider the color space of a
high-resolution video stream to be BT.2020.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1445>
* Add fec / red encoders as direct children of webrtcbin, instead
of providing them to rtpbin through the request-fec-encoder signal.
That is because they need to be placed before the rtpfunnel, which
is placed upstream of rtpbin.
* Update configuration of red decoders to set a list of RED payloads
on them, instead of setting the pt property.
That is because there may be one RED pt per media in the same session.
* Connect to request-fec-decoder-full instead of request-fec-decoder,
in order to instantiate FEC decoders according to the payload type
of the stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
When multiple streams are bundled together, there may be more
than one red payload type to handle.
In addition, as the red decoder works by filling in gaps in
the seqnums, there needs to be one rtp_history queue per sequence
domain.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
In redenc, when input buffers have a header for the TWCC extension,
we now add one to our wrapper buffers.
In ulpfecenc, we add one to our protection buffers in that case.
This makes TWCC functional when UlpRed is used in webrtcbin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1414>
In order to allow "level-asymmetry-allowed", we now handle a new
"profile" field, which has the same semantics as the "profile" field in
H.264 streams, so that we can force the payloaded stream to have the
right format when using `gst_sdp_media_get_caps_from_media` to set a
caps filter after the payloader. This allows simple SDP-based RTP
negotiation (as in WebRTC) for that particular case, while closely
respecting the specs.
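For illustration, a hedged sketch of that caps-filter setup (the
`media`, `pt` and `capsfilter` variables are assumptions, not code
from the patch):

    /* derive caps from the negotiated SDP media and use them to
     * constrain the payloader output */
    GstCaps *caps = gst_sdp_media_get_caps_from_media (media, pt);

    g_object_set (capsfilter, "caps", caps, NULL);
    gst_caps_unref (caps);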
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1410>
The ["level-asymmetry-allowed"] field states that the peer wants the
profile specified in the "profile-level-id" fields but doesn't care
about the level. To express this in GStreamer caps term, we add a
"profile" field in the caps, which reuses the usual "profile" semantics
for H.264 streams and, and remove "profile-level-id" and
"level-asymmetry-allowed" fields.
["level-asymmetry-allowed"]: https://www.iana.org/assignments/media-types/video/H264
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1410>
We are querying the supported swapchain colorspaces via
CheckColorSpaceSupport(), but it doesn't seem to be reliable.
Use only tested full-range RGB formats, which are:
- sRGB
- BT709 primaries with linear RGB
- BT2020 primaries with PQ gamma
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1433>
When using playbin3, it seems that the alpha decode is always the first
to push caps and run an allocation query. As the format changes from
the sink and alpha pads were not synchronized, the allocation query
could end up being run before the caps are pushed. That may lead to a
failing query, which makes the decoder think there is no GstVideoMeta
downstream and most likely do a CPU copy of the frame.
This patch implements a format cookie to track and synchronize the
format changes on both pads, fixing the racy performance issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
This adds the alignment field to the template caps. Without this field
set, the auto-plugger will see fixed caps and will use
gst_caps_is_subset() against the caps produced by the parser. This is a
challenge for all cases where a parser can do conversion. This is fixed
by adding the alignment field, which makes the auto-pluggers intersect
the caps instead, as they now get unfixed caps.
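For illustration, a parser-style template where the unfixed alignment
field keeps the template caps from being fixed (the caps string is an
example, not the actual template from this patch):

    static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src",
        GST_PAD_SRC, GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("video/x-h264, alignment=(string){ au, nal }"));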
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
XNextEvent() blocks indefinitely in the absence of X11 events, which
can prevent the pipeline from stopping.
This can cause problems when ximagesrc is used in "remote desktop"
scenarios and the GStreamer application itself, through which the user
is viewing and controlling the machine, is the only source of input
events.
Replace the call with non-blocking XCheckTypedEvent().
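A minimal sketch of the non-blocking pattern, simplified from what the
patch does (not the exact ximagesrc code):

    static void
    drain_expose_events (Display * display)
    {
      XEvent ev;

      /* XCheckTypedEvent() returns False immediately when no matching
       * event is queued, so the streaming thread is never blocked here */
      while (XCheckTypedEvent (display, Expose, &ev)) {
        /* handle the event */
      }
    }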
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1438>
Since `gst_caps_replace()` and `gst_pad_set_caps()` both ref the caps, and neither of them takes ownership of the caps, the caps must be unreffed in `gst_multi_file_src_set_property()`.
To test the leak (on Unix): `echo coucou > /tmp/file.txt && GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7" gst-launch-1.0 multifilesrc location=/tmp/file.txt caps='txt' ! fakesink`
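A sketch of the resulting ownership handling in the property setter
(helper and field names are illustrative, not the exact multifilesrc
code):

    static void
    set_caps_property (GstMultiFileSrc * src, GstCaps * new_caps)
    {
      /* both calls below take their own ref and neither takes
       * ownership of new_caps */
      gst_caps_replace (&src->caps, new_caps);
      gst_pad_set_caps (GST_BASE_SRC_PAD (src), new_caps);

      /* drop the ref we own, otherwise the caps leak */
      gst_caps_unref (new_caps);
    }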
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1436>
When it is detected that the remote time has been reset, which may
occur if the remote device providing the clock server has been power
cycled, the clock is no longer synced. Setting the clock state will
trigger a signal to the client informing it that sync was lost, making
it possible to take appropriate action.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/975>
Due to a copy-paste bug, the bitdepth was never set, which led to
requesting a sizeimage of 0. Previously that worked since the driver
would in that case pick a size for us. But now that we bumped the
minimum to 4KB, the driver happily allocates 4KB of bitstream, which
leads to decoding errors.
As MPEG2 has a fixed bitdepth of 8, use a define instead of the
run-time variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1415>
vaapidecode is used in vaapidecodebin and it exposes all the
theoretically supported caps, but that slows down autoplugging. With
this change, autoplugging negotiates faster, giving decodebin more
options to select another decoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1405>
The V4L2 uAPI uses pic_num for both PicNum and ShortTermPicNum. It
also does the same for both FrameNum and LongTermFrameIdx. This change
does not change the fluster score, but fixes a visual corruption
noticed with some third-party streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1387>
The current code does not set up the copied memory correctly when it is
popped from the surface cache pool.
1. We forget to ref the allocator, which causes the allocator to be
freed unexpectedly, and we get a crash later because of the memory
violation.
2. We forget to increase ref_mems_count, which causes a surface leak
because the surface cannot be pushed back to the cache pool again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1373>
Set minimum sizeimage such that there is enough space for any overhead
introduced by the codec.
Notably, fix a vp9 issue in which a small image would not have a
bitstream buffer large enough to accommodate it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1012>
Rework gstvp9{decoder|statefulparser} to optionally parse compressed headers.
The information in these headers might be needed for accelerators
downstream, so optionally parse them if downstream requests it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1012>
Also ignore 0x0 sizes in the fallback case and assume the size can be
anything between 1x1 and MAXxMAX.
This fixes the case where width=0, height=0 caps are created. With
this patch the caps will contain width=[1,MAX], height=[1,MAX].
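An illustrative sketch of that fallback (the variable name is assumed):

    gst_caps_set_simple (caps,
        "width", GST_TYPE_INT_RANGE, 1, G_MAXINT,
        "height", GST_TYPE_INT_RANGE, 1, G_MAXINT, NULL);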
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1396>
When an extmap is defined twice for the same ID, firefox complains and
errors out (chrome is smart enough to accept strict duplicates).
To work around this, we deduplicate extmap attributes, and also error
out when a different extmap is defined for the same ID.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1383>
There is a lot of info in the mpeg2 sequence (also including ext,
display_ext and scalable_ext). We need to notify the subclass about
changes to it, but not all of the changes should trigger a drain(),
which may change the output picture order. For example, matrix changes
in the sequence header do not change the decoder context, so there is
no need to trigger a drain().
Fixes: #899
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1375>
Follow-up on 97d83056b3: only check
for intersection with the current srccaps when checking if a sinkpad
can accept caps.
I must have been lucky in my firefox testing then, and always entered
the code path with audio getting negotiated first, thus not failing
the is_subset check when srccaps had been negotiated as
application/x-rtp, and an accept-caps query was made for the video
caps with a defined extmap.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1384>
This is a requirement for GstPlayer when using the default overlay interface
provided by the pipeline. The GstPlayerWrappedVideoRenderer requires a valid
pipeline, but that's available only after the GstPlay thread has successfully
started.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1345>
Dynamically registered elements (hardware elements in most cases)
may or may not be available on a system, and their properties may
differ per system.
This new API makes it possible to skip documentation in a programmable
way.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1360>
ShowWindow() could block while doing gst_d3d11_window_win32_unprepare
when an external window handle is provided to d3d11videosink in a
multi-threaded environment.
The condition under which the issue happened is: the UI thread is
waiting for a background thread that changes the d3d11videosink state
to NULL, and the background thread tries to send a window message to
the queue. The queue is already occupied by the UI thread, so the
background thread will be blocked.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1366>
Instead of defining a sized array in the function signature, use it
unsized (a pointer alias, basically). This way the compiler warning is
silenced:
warning: ‘fill_profiles’ accessing 64 bytes in a region of size 12 [-Wstringop-overflow=]
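For illustration, a before/after of such a prototype change (the
parameter types are assumptions, not the actual gsth265parser code):

    /* before: the sized array in the prototype makes the compiler warn
     * when the caller passes a smaller object */
    static void fill_profiles (GstH265Profile profiles[64], guint * len);

    /* after: the unsized array decays to a plain pointer, so the
     * warning is no longer emitted */
    static void fill_profiles (GstH265Profile profiles[], guint * len);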
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1357>
If Qt asks us to redraw before we have set both a buffer and caps, we
would attempt to use the new caps with the old buffer, which could
result in bad things happening.
Only update caps from new_caps once the buffer has actually been set.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1346>
Make sure the EGLImage we're rendering to the GL memory stays alive
long enough, until the GL memory has been destroyed.
This change fixes tearing and black-flash artefacts that were happening
because the EGLImage was sometimes destroyed before the sink actually
rendered the associated texture.
Fixes #889
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1354>
This takes a plain message string rather than a format string, and as
a result it doesn't have to be passed through vasprintf(), avoiding
further unnecessary allocations. It can also contain literal `%`
characters because of that.
The new function is mostly useful for bindings that would have to pass a
full string to GStreamer anyway and would do formatting themselves with
language-specific functionality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1356>
There's a potential race condition with this sort of pipeline on
certain systems (depending on the processing load):
GST_DEBUG_DUMP_DOT_DIR=/tmp \
gst-launch-1.0 uridecodebin3 uri=file://stream.mp4 ! glupload ! \
glimagesink --gst-debug=*:4
Right after the pipeline passes from PAUSED to READY, bin_to_dot_file
dumps the uridecodebin3 properties, but the current uri and suburi
might already be freed, causing a potential use-after-free.
This patch sets the current item to NULL right after all the play
items are freed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1353>
A previous patch caused rtpfunnel to output twcc-related information
downstream; however, this leaked into upstream negotiation (through
funnel->srccaps), causing the payloader to negotiate twcc caps even
when not prompted to do so by the user.
Fix this by only enforcing that upstream sends us application/x-rtp
caps, as was the case originally.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1278>
Negative composition time offsets are only allowed with version 1 of
the box; however, we parse it as a signed value also for version 0
boxes, as unfortunately there are such files out there and it's
unlikely for them to have (valid) huge composition offsets.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1294>
Sometimes the resampler has enough space to store all the incoming
samples without outputting anything. When this happens,
gst_audio_resampler_get_out_frames() returns 0.
In that case, the resampler should consume samples and just return.
Otherwise, we get a segfault when gst_audio_resampler_resample() tries
to resample into a NULL 'out' pointer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1343>
g_sequence_remove_range's end iter is exclusive, so if one
wants to remove that item as well, it should be called with
the next iter.
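For example (a sketch; `seq` and `iter` are illustrative), trimming the
sequence up to and including `iter`:

    /* the end iterator is exclusive, so pass the iterator *after* the
     * last item that should be removed */
    g_sequence_remove_range (g_sequence_get_begin_iter (seq),
        g_sequence_iter_next (iter));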
This could in theory fix an issue where:
* The sequence isn't entirely trimmed, with an old item lingering
* Following FEC packets are immediately discarded because they
arrived later than corresponding media packets, long enough for
seqnums to wrap around
* We now try to reconstruct a media packet with a completely obsolete
FEC packet, chaos ensues.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1341>
We need to hold onto the last buffer until the next buffer arrives.
Before, if a caps change came we would remove the currently rendering
buffer. Then, if Qt asked us to render something, we would render the
dummy black texture.
Fixes a period of black output when upstream is e.g. changing
resolution, as in HLS adaptive bitrate scenarios.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1338>
The `gst_v4l2_buffer_pool_dqbuf` function contains this ominous comment:
    /* get our GstBuffer with that index from the pool, if the buffer was
     * outstanding we have a serious problem.
     */
    outbuf = pool->buffers[group->buffer.index];
Unfortunately it is common for buffers in _output_ buffer pools to be
both queued and outstanding at the same time. This can happen if the
upstream element keeps a reference to the buffer, or in an encoder
element itself when it keeps a reference to the input buffer for each
frame.
Since the current code doesn't handle this case properly we can end up
with crashes in other elements such as:
(gst-launch-1.0:32559): CRITICAL **: 17:33:35.740: gst_video_frame_map_id: assertion 'GST_IS_BUFFER (buffer)' failed
and:
(gst-launch-1.0:231): GStreamer-CRITICAL **: 00:16:20.882: write map requested on non-writable buffer
Both these crashes are caused by a race condition related to releasing
the same buffer twice from two different threads. If a buffer is queued
and outstanding this situation is possible:
**Thread 1**
- Calls `gst_buffer_unref` decrementing the reference count to zero.
- The core GstBufferPool object marks the buffer non-outstanding.
- Calls the V4L2 release buffer function.
- If the buffer is _not_ queued:
- Release it back to the free pool (containing non-queued buffers).
**Thread 2**
- Dequeues the queued output buffer.
- Marks the buffer as not queued.
- If the buffer is _not_ outstanding:
- Calls the V4L2 release buffer function.
- Release it back to the free pool (containing non-queued buffers).
If both of these threads run at exactly the same time there is a small
window where the buffer is marked both not outstanding and not queued
but before it has been released. In this case the buffer will be freed
twice causing the above crashes.
Unfortunately the variable recording whether a buffer is outstanding is
part of the core `GstBuffer` object and is managed by `GstBufferPool` so
it's not as straightforward as adding a mutex. Instead we can fix this
by additionally recording the buffer state in `GstV4l2BufferPool`, and
handle "internal" and "external" buffer release separately so we can
detect when a buffer becomes not outstanding.
In the new solution:
- The "external" buffer pool release and the "dqbuf" functions
atomically update the buffer state and determine if a buffer is still
queued or outstanding.
- Subsequent code and a new
`gst_v4l2_buffer_pool_complete_release_buffer` function can proceed to
release (or not) a buffer knowing that it's not racing with another
thread.
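A conceptual sketch of that state tracking, with illustrative names
(the actual GstV4l2BufferPool code differs in detail):

    enum
    {
      BUFFER_STATE_OUTSTANDING = 1 << 0,
      BUFFER_STATE_QUEUED = 1 << 1,
    };

    /* Clear one state bit atomically. Only the thread that observes both
     * bits dropping to zero completes the release, so a buffer can never
     * be returned to the free pool twice. */
    static void
    buffer_clear_state (guint * state, guint bit_to_clear)
    {
      guint old = g_atomic_int_and (state, ~bit_to_clear);

      if ((old & ~bit_to_clear) == 0) {
        /* neither queued nor outstanding any more; this is where
         * gst_v4l2_buffer_pool_complete_release_buffer() would run */
      }
    }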
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1010>
The pipeline flow for receiving looks like this:
rtpsession ! rtpssrcdemux ! session_fec_decoder ! rtpjitterbuffer ! \
rtpptdemux ! stream_fec_decoder ! ...
There are two places where a fec decoder could be placed.
1. As requested from the 'request-fec-decoder' signal: after
rtpptdemux, for each ssrc/pt produced.
2. After rtpssrcdemux but before rtpjitterbuffer: added for the
rtpst2022-1-fecenc/dec elements.
However, there was some cross-contamination of the elements involved and
the request-fec-decoder signal was also being used to request the fec
decoder for the session_fec_decoder, which would then be cached and
re-used for subsequent fec decoder requests. This would cause attempts
to link the same element to multiple elements in different places in
the pipeline. This would fail and cause all kinds of havoc,
usually resulting in a not-linked error being returned upstream and an
error message being posted by the source.
Fix by not using the request-fec-decoder signal for requesting the
session_fec_decoder and instead solely rely on the added properties for
that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1300>
The average_period should always represent the time between two
events. The specification defines the event time as the time
between audio samples, video frame sync, video line sync, etc.
In the case of one timestamp per PDU, the timestamp_interval
identifies the number of events between the timestamp of one PDU and
the timestamp of the next PDU.
As described in IEEE 1722-2016, chapter "10.4.12 timestamp_interval
field", timestamp_interval shall be nonzero.
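Put another way (a hedged reading of the above, not a formula quoted
from the spec), with one timestamp per PDU:

    average_period = (timestamp_of_next_PDU - timestamp_of_this_PDU)
                     / timestamp_interval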
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1076>
The failure conditions can be overridden by subclasses, and a boolean
return value is provided to the caller indicating whether adding or
removing the child element has actually worked. The caller can then
handle this accordingly, but flooding stderr with this is not very
useful.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1320>
Upstream caps might for example be
application/x-rtp,media=audio,encoding-name={OPUS, X-GST-OPUS-DRAFT-SPITTKA-00, multiopus}
and while that is not fixed caps, it is enough to match it with a
media.
Only caps structures that have the correct structure name and that
have the media and encoding-name fields are preserved; if both fields
are present then these caps are used as "codec preferences".
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1291>
Currently, reading the extension relies on the fact that everything
after the last "." character is the file extension. While that works
fine for most cases, it breaks when the URI contains a query part.
E.g. `http://url.com/file.mp4?param=value` returns `mp4?param=value`
instead of `mp4`.
In this commit we use the URI parser to read the path of the URI (in
the example above, that is `/file.mp4`) and read the extension from
that path.
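A sketch of that approach using the GstUri parser (illustrative, not
the exact code from the commit; real code would also handle a missing
"."):

    #include <string.h>

    GstUri *uri = gst_uri_from_string ("http://url.com/file.mp4?param=value");
    gchar *path = gst_uri_get_path (uri);   /* "/file.mp4" */
    const gchar *ext = strrchr (path, '.'); /* ".mp4", or NULL if absent */

    /* use ext + 1 ("mp4") as the extension when ext is non-NULL */

    g_free (path);
    gst_uri_unref (uri);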
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1305>
Some decoding APIs support delayed output for performance reasons.
One example would be to request decoding for multiple frames and
then query for the oldest frame in the output queue.
This also increases throughput for transcoding and improves seek
performance when supported by the underlying backend.
Introduce support in the mpeg2 base class, so that backends that
support render delays can actually implement it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
Downstream might need the start code offset when decoding.
Previously this computation would be scattered across multiple sites.
This is error-prone, so move it to the base class. Subclasses can
access slice->sc_offset directly without computing the address
themselves, knowing that the size will also take the start code into
account.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
The GstV4l2CodecAllocator dispose function clears `self->decoder`, but
the finalize function then tries to use it if the allocator has not
been detached yet.
Fix by detaching in the dispose function before we clear
`self->decoder`.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1220>
Standard interlace handling:
* If we have interlace-mode=interleaved and the field order, we just
set it when creating the session
* If we have interlace-mode=(interleaved|mixed) and no field order, we
set the field order on the first buffer
The encoder session does not support changing the FieldDetail after it
has started encoding frames, so we cannot support mixed streams
correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1214>