The unprepare method posts the WM_GST_D3D11_DESTROY_INTERNAL_WINDOW
command to the window queue and, from that moment on, considers
internal_hwnd released, so it sets it to NULL.
The problem is that the window thread might already be processing some
other command right at that moment, or another command might already be
sitting in the queue.
In practice we hit a crash when WM_PAINT got processed in between
(unprepare had already finished while WM_GST_D3D11_DESTROY_INTERNAL_WINDOW
had not been handled yet).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6187>
When the conversion only changes the caps feature from memory:VAMemory to
system memory, it's possible to optimize by doing a pseudo passthrough,
since the VA-backed buffers are the same buffers used for system memory.
This change also mitigates #2940
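A minimal sketch of the kind of check this enables, assuming the literal
caps feature string "memory:VAMemory" and single-structure caps
(illustrative only, not the actual vapostproc code):
```
#include <gst/gst.h>

/* The conversion only changes the caps feature, so the element can run in a
 * pseudo passthrough mode. */
static gboolean
only_caps_feature_changes (GstCaps * in_caps, GstCaps * out_caps)
{
  GstCapsFeatures *in_f = gst_caps_get_features (in_caps, 0);
  GstCapsFeatures *out_f = gst_caps_get_features (out_caps, 0);

  if (!gst_caps_features_contains (in_f, "memory:VAMemory"))
    return FALSE;
  if (!gst_caps_features_contains (out_f,
          GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY))
    return FALSE;

  /* same format, resolution, etc.: only the feature differs */
  return gst_structure_is_equal (gst_caps_get_structure (in_caps, 0),
      gst_caps_get_structure (out_caps, 0));
}
```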
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6174>
If the allocation query received from downstream doesn't handle GstVideoMeta
but requests the memory:DMABuf caps feature, it's incomplete, so we reject
the negotiation instead.
This applies to the base decoder, the base transform and the compositor.
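A sketch of such a validation, assuming single-structure caps in the query
(illustrative helper, not the element code itself):
```
#include <gst/gst.h>
#include <gst/video/video.h>
#include <gst/allocators/allocators.h>

/* Reject an allocation query that asks for DMABuf caps but does not expose
 * GstVideoMeta: without the meta the DMABuf layout cannot be described. */
static gboolean
allocation_query_is_usable (GstQuery * query)
{
  GstCaps *caps = NULL;
  gboolean has_videometa;

  gst_query_parse_allocation (query, &caps, NULL);
  has_videometa = gst_query_find_allocation_meta (query,
      GST_VIDEO_META_API_TYPE, NULL);

  if (caps && gst_caps_features_contains (gst_caps_get_features (caps, 0),
          GST_CAPS_FEATURE_MEMORY_DMABUF) && !has_videometa)
    return FALSE;               /* incomplete: fail the negotiation */

  return TRUE;
}
```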
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6155>
This is a simplification of the venerable
gst_va_base_dec_get_preferred_format_and_caps_features() function, which
dates back to gstreamer-vaapi. It's used to select the format and the caps
feature to use when setting the output state. It was complex and hard to
follow. This refactor simplifies the algorithm considerably.
The first step is to remove _downstream_has_video_meta(), since most of the
time it is called before caps negotiation, and allocation queries only make
sense after caps negotiation. It might work during renegotiation, but in
that case a caps feature change is uncommon. Better a simple and common
approach.
Also, for performance, GQuarks are used instead of dealing with caps
features as strings.
The refactor works like this:
1. If the peer pad returns ANY caps, the chosen caps feature is system
   memory and a proper format is looked for in the allowed caps.
2. Otherwise the allowed caps are traversed at most 3 times: once per valid
   caps feature, first VAMemory, then DMABuf, and last system memory. The
   first feature that matches in the allowed caps is picked, and the first
   format matching the chroma is picked too (see the sketch below).
Notice that, right now, when using playbin, videoconvert never returns ANY
caps.
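A rough sketch of the traversal in point 2 above, assuming the literal
feature strings and one feature set per caps entry (illustrative helper,
not the actual gstvabasedec code):
```
#include <gst/gst.h>

/* Walk the allowed caps once per caps feature, in order of preference, and
 * pick the first entry that matches. */
static gint
find_preferred_caps_index (GstCaps * allowed)
{
  GQuark preference[] = {
    g_quark_from_static_string ("memory:VAMemory"),
    g_quark_from_static_string ("memory:DMABuf"),
    g_quark_from_static_string (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY),
  };
  guint i, j;

  for (i = 0; i < G_N_ELEMENTS (preference); i++) {
    for (j = 0; j < gst_caps_get_size (allowed); j++) {
      if (gst_caps_features_contains_id (gst_caps_get_features (allowed, j),
              preference[i]))
        return j;               /* first match wins; format selection follows */
    }
  }

  return -1;
}
```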
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6154>
Some subtitle "decoders" had the wrong category of "Parser", which `parsebin`
relies on to identify elements which do not *decode* streams but *parse* them.
This would cause such subtitle decoders to be plugged within parsebin,
preventing the original stream from being properly used by (more efficient)
downstream decoders or subtitle renderers.
Fixes #1757
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6153>
This falls under the same rule as gst_buffer_add_meta():
```
gst-mpegtspesmetadatameta.h:98: Warning: GstMpegts:
gst_buffer_add_mpegts_pes_metadata_meta: return value: Invalid non-constant
return of bare structure or union; register as boxed type or (skip)
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6146>
This reverts the questionable commit 009bc15f33, which looks completely
wrong.
The GstWasapi2RingBuffer:buffer_size variable is used to calculate the
available buffer size we can write
(i.e., available size = buffer_size - padding_size),
but that commit makes the size exactly the same as the buffer period.
This can make the element believe the endpoint buffer is full on the
I/O event callback (when the padding size equals the buffer period),
even though it is not.
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2870
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6132>
- Add the missing field parameter and put the output parameter at the
end.
- Use a switch to verify valid values instead of hard-to-follow range
checks.
- Don't consider bad values a programming error, just a regular failure.
- Set all data fields at the end so we can pass a pointer to an
uninitialized structure without GCC complaining.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5450>
The global semaphore was never closed/unlinked, causing a permission
denied error if the device was later used by another user. Properly
removing the semaphore when stopping the pipeline would still leave it
open in case of a crash.
With a GStreamer-specific name, it was also not preventing other apps from
accessing the device concurrently.
Finally, if the system has multiple cards, the lock should be per card
and not global (to be confirmed).
Fixes: #3283.
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6117>
The MaxDpbSize specified in A.4.2 gives an upper bound for the decoded
picture buffer size but does not give the actual required size.
Use the max_dec_pic_buffering value as the DPB size instead. Some backends
such as DXVA and NVDEC might require a pre-allocated DPB buffer, and an
unnecessarily large DPB size results in wasted GPU memory.
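For illustration, a sketch of the sizing, assuming the GstH265SPS field
names from the codecparsers library (a sketch, not the exact decoder code):
```
#include <gst/codecparsers/gsth265parser.h>

/* Size the DPB from what the stream actually signals
 * (sps_max_dec_pic_buffering_minus1 of the highest sub-layer)
 * instead of the A.4.2 worst-case MaxDpbSize. */
static guint
get_dpb_size (const GstH265SPS * sps)
{
  return sps->max_dec_pic_buffering_minus1[sps->max_sub_layers_minus1] + 1;
}
```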
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6101>
In rtpbin we already systematically check all property names except
latency; correct that.
In webrtcbin we need to check before trying to use the do-retransmission
property.
This is useful for the case where an element like identity gets passed
to rtpbin's request-jitterbuffer property, when the application wants
to use webrtcbin in an SFU situation, with no reordering and no added
latency.
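A minimal sketch of such a guard, using a hypothetical helper name (the
element handed over by the application may be anything, e.g. identity):
```
#include <gst/gst.h>

/* Only set "do-retransmission" if the provided element actually has that
 * property; plain identity, for instance, does not. */
static void
maybe_enable_rtx (GstElement * jitterbuffer)
{
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (jitterbuffer),
          "do-retransmission"))
    g_object_set (jitterbuffer, "do-retransmission", TRUE, NULL);
}
```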
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6112>
According to a recommendation from MS, IDXGIOutputDuplication::ReleaseFrame()
needs to be called just before IDXGIOutputDuplication::AcquireNextFrame()
for performance reasons, so that the driver can accumulate dirty rects
and update the texture at once. But it seems to cause choppy output.
Release the acquired frame immediately once processing is done,
like the d3d11 implementation does.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6092>
* Bump the rank of the musepack v7/v8 FFmpeg demuxers to SECONDARY
* Bump the rank of the musepack v7/v8 FFmpeg audio decoders to SECONDARY
* Demote the rank of the musepackdec element to MARGINAL
This is for two reasons:
* The musepack library is no longer maintained, whereas the FFmpeg
implementation can/will receive fixes
* The `musepackdec` implementation was an all-in-one "parsing and decoding" blob
which doesn't play nicely with decodebin3 and others
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3033
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6074>
Send a gap event if there is nothing to output for a given input buffer.
For example, an input buffer might not contain any caption data for the
field requested by downstream, in which case we need to inform downstream
about it.
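A minimal sketch of that notification, assuming the input buffer's
timestamp and duration are known (illustrative, not the element's exact
code):
```
#include <gst/gst.h>

/* Cover the time span of an input buffer that produced no caption data. */
static gboolean
push_gap_for_buffer (GstPad * srcpad, GstClockTime pts, GstClockTime duration)
{
  return gst_pad_push_event (srcpad, gst_event_new_gap (pts, duration));
}
```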
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6073>
WebKit commit b12e7ed2ad3a ("[WPE] Upstream the new WPE platform API
https://bugs.webkit.org/show_bug.cgi?id=265286")[1] added a `WPEView` typedef
which clashes with our `WPEView` class.
Rename the `WPEView` class to `GstWPEThreadedView` to avoid the collision.
Also prefix the `WPEContextThread` class with `Gst` and rename the
source files to reflect the new class names, switching them to lowercase
while at it for consistency.
[1] b12e7ed2ad
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6065>
Previously, the path lock was held even while issuing caps queries to
other elements. This can lead to deadlocks in more complex pipelines.
Avoid this by reworking gst_switch_bin_get_allowed_caps() to acquire
references to the switchbin paths and then release the path lock.
Subsequent operations in that function then act on the acquired
references, thus eliminating the need for holding the path lock for
the entirety of that function.
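A sketch of the pattern, with illustrative names (path_lock, paths) rather
than the actual switchbin members:
```
#include <gst/gst.h>

/* Take references while the lock is held, then drop the lock before issuing
 * caps queries on the referenced paths. */
static GList *
collect_path_refs (GMutex * path_lock, GList * paths)
{
  GList *refs = NULL, *l;

  g_mutex_lock (path_lock);
  for (l = paths; l != NULL; l = l->next)
    refs = g_list_prepend (refs, gst_object_ref (l->data));
  g_mutex_unlock (path_lock);

  return refs;
}
```
The caller then performs the caps queries on those references and finally
drops them with g_list_free_full (refs, gst_object_unref).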
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4632>
The caps query specifies _all_ caps that the element can handle, not just
caps from the current path element. If for example a switchbin has two
paths, with one having an element that handles video/x-h264, and another
path whose element handles video/x-raw, and the second path is the
current path, then the existing code would report only video/x-raw as
supported. Fix this by reporting all allowed caps, even if there is a
current path defined.
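A minimal sketch of the accumulation, where each list entry is assumed to
be a GstCaps* describing what one path can handle (illustrative, not the
actual switchbin code):
```
#include <gst/gst.h>

/* Merge the caps of every path, not just the current one. */
static GstCaps *
get_all_allowed_caps (GList * per_path_caps)
{
  GstCaps *result = gst_caps_new_empty ();
  GList *l;

  for (l = per_path_caps; l != NULL; l = l->next)
    result = gst_caps_merge (result, gst_caps_ref (l->data));

  return result;
}
```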
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4632>
The rationale is that a passthrough path (= one with no element) behaves
as if the switchbin's sink- and srcpad were one. In particular, internal
caps queries (needed for computing the allowed caps) then go to the peers
instead of to the path elements. Rework gst_switch_bin_get_allowed_caps () for
a clear handling of NULL path elements and for proper dataflow passthrough
and caps & accept-caps query handling.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4632>
The drop probe was present in early switchbin versions to implement paths
that drop dataflow. However, this feature turned out to be too problematic
and thus was removed. Some bits remained though. This commit removes those
bits and clarifies that in the current switchbin version, a NULL path
element instead means passthrough.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4632>
If the current segment has a configured stop point, detect when pad
timestamps advance past that point and mark those pads as EOS. Otherwise,
tsdemux continues streaming the whole input downstream (unless something
downstream detects this and returns EOS for us).
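A sketch of the per-pad check, assuming the pad's running timestamp is
available (illustrative, not the actual tsdemux code):
```
#include <gst/gst.h>

/* Once a pad's timestamps pass the configured segment stop, the pad can be
 * marked EOS instead of streaming the rest of the input. */
static gboolean
pad_reached_segment_stop (const GstSegment * segment, GstClockTime ts)
{
  return GST_CLOCK_TIME_IS_VALID (segment->stop) &&
      GST_CLOCK_TIME_IS_VALID (ts) && ts >= segment->stop;
}
```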
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6023>
Use string parsing instead of pointer arithmetic, which makes the code
easier to understand and less error-prone. This has no functional
changes, and is preparation for the next commit, which extends the
header parsing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5997>
Fence data can hold a GstD3D12Device directly or indirectly. If it holds
the last refcount, the device object will be released from the device
object's own internal thread, which will then try to join itself.
Delegate the release to a global background thread to avoid the
self thread join.
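A sketch of that delegation, using a hypothetical shared GThreadPool
(illustrative only, not the actual gstd3d12 code):
```
#include <gst/gst.h>

/* Assumed to be created once at plugin load with
 * g_thread_pool_new (release_in_background, NULL, 1, FALSE, NULL). */
static GThreadPool *release_pool;

static void
release_in_background (gpointer object, gpointer user_data)
{
  /* the final unref happens on the pool thread, never on the released
   * object's own internal thread */
  gst_object_unref (object);
}

static void
schedule_release (GstObject * object)
{
  g_thread_pool_push (release_pool, object, NULL);
}
```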
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6042>
The libwebp API doesn't match very well with the GstVideoEncoder
API, as it only delivers an unframed bitstream once all pictures
have been processed, which means we can only push a single buffer
manually on our srcpad on finish().
Supporting animated webp is still valuable, and the feature is
behind an opt-in property.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5994>
- gst_analytics_cls_mtd_get_length() returns a gsize; this type dictates the
  index type for gst_analytics_cls_mtd_get_quark() and
  gst_analytics_cls_mtd_get_level().
- Minor cleanup/improvement on index validation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6018>
videoconvertscale advertises the `ANY` caps feature, but it supports it only
in passthrough. Our goal with autoconvert is to ensure that conversion
is possible with the elements that are being plugged so we avoid
plugging `videoconvertscale` if the memory type is not system memory.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/899>
Instead of leaving all the elements that were ever added inside the bin,
add them only when strictly needed and remove them when we stop using
them.
With the previous refactoring we keep them in our own hashmap in any case,
so we can keep reusing the same elements as before.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/899>
We used to consider elements that are subclasses of another element type
as being the same (for example with videoconvertscale: both videoconvert
and videoscale are subclasses of videoconvertscale), and that handling was
lost.
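A minimal sketch of such a relationship check, with a hypothetical helper
name (the factories must be loaded for the element types to be resolvable):
```
#include <gst/gst.h>

/* Two factories belong to the same "family" when one element type derives
 * from the other (e.g. videoconvert and videoscale both derive from
 * videoconvertscale). */
static gboolean
factories_are_related (GstElementFactory * a, GstElementFactory * b)
{
  GType type_a = gst_element_factory_get_element_type (a);
  GType type_b = gst_element_factory_get_element_type (b);

  return g_type_is_a (type_a, type_b) || g_type_is_a (type_b, type_a);
}
```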
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/899>
The output of the VP9 and AV1 encoders is a little different from the H264
and H265 encoders: it may contain repeated frames, so the number of output
frames may be bigger than the number of input frames. We need to call
finish_subframe() when a frame will be repeated later, so we need to
extend the current prepare_output() virtual function.
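A sketch of how the two cases map onto the GstVideoEncoder API, with
frame->output_buffer assumed to be set already (illustrative, not the
actual code):
```
#include <gst/video/gstvideoencoder.h>

/* Push the current output as a subframe when the frame will show up again
 * later; finish it for real with the last repetition. */
static GstFlowReturn
output_encoded_frame (GstVideoEncoder * encoder, GstVideoCodecFrame * frame,
    gboolean will_be_repeated)
{
  if (will_be_repeated)
    return gst_video_encoder_finish_subframe (encoder, frame);

  return gst_video_encoder_finish_frame (encoder, frame);
}
```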
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3015>