When bundling, if multiple media sections use the same media payload type,
then each of the fec/rtx/red additions would add a distinct payload type.
This could very easily overflow the available payload type space.
Instead, track the relationship between the media payload value and
the relevant fec/rtx/red payload values and reuse them whenever
necessary, even when bundling.
e.g.
...
a=group:BUNDLE video0 video1
m=video 9 UDP/SAVPF 96 97
a=mid:video0
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
m=video 9 UDP/SAVPF 96 97
a=mid:video1
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
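A minimal sketch (GLib, hypothetical helper and variable names) of the kind of
bookkeeping this implies, reusing an already-reserved rtx payload type for a
given media payload type:

static guint
lookup_or_reserve_rtx_pt (GHashTable * media_pt_to_rtx_pt, guint media_pt,
    guint * next_free_pt)
{
  gpointer val;

  /* reuse the rtx payload type already reserved for this media payload type */
  if (g_hash_table_lookup_extended (media_pt_to_rtx_pt,
          GUINT_TO_POINTER (media_pt), NULL, &val))
    return GPOINTER_TO_UINT (val);

  /* first time we see this media payload type: reserve a new rtx payload type */
  g_hash_table_insert (media_pt_to_rtx_pt, GUINT_TO_POINTER (media_pt),
      GUINT_TO_POINTER (*next_free_pt));
  return (*next_free_pt)++;
}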
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2474>
* Enhance the debug log to print human-readable D3D11_FORMAT_SUPPORT flags
instead of a packed numeric flagset value.
* Only formats supported by the device will be added to the format table.
Depending on the device feature level (e.g., D3D9 feature-level devices),
16-bit formats may not be supported. Although there might be formats
we define but the device does not support, this will not be a major issue in
practice since our D3D11 implementation already does not support legacy devices
(a known limitation) and the old d3dvideosink will be promoted in that case.
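For illustration, a hedged sketch of the kind of per-format check this implies
(C COM macros; variable names assumed):

/* only add the format to the table if the device reports support for it */
UINT support = 0;
HRESULT hr = ID3D11Device_CheckFormatSupport (device, DXGI_FORMAT_P010, &support);

if (SUCCEEDED (hr) && (support & D3D11_FORMAT_SUPPORT_TEXTURE2D)) {
  /* usable on this device: append it to the format table */
}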
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2441>
The output may attempt to set the width and height to zero if the caps
carry no such information, which causes the capture queue to get invalid
dimensions, and the decoder then reports a negotiation failure.
So set a default resolution when the caps carry no dimensions.
The real values will be set again once the source change event is signaled.
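A hypothetical sketch of the fallback (the default values here are placeholders,
not the actual ones used):

gint width = 0, height = 0;

/* fall back to a default size when the caps carry no dimensions; the real
 * values are applied once the source change event is signaled */
if (!gst_structure_get_int (structure, "width", &width) || width <= 0)
  width = 320;                  /* assumed placeholder default */
if (!gst_structure_get_int (structure, "height", &height) || height <= 0)
  height = 240;                 /* assumed placeholder default */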
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2400>
Now it uses the JPEG parser in libgstcodecparsers, and the code is
simplified by relying more on the baseparse class for tag handling.
The element now signals chroma-format, and the default framerate is 0/1,
which corresponds to still images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1473>
While current and future LoongArch machines that are supposed to run
GStreamer all support unaligned accesses, there might be future
lower-end cores (e.g. the embedded product line) without such support,
and we may not want to penalize these use cases.
So, mark LoongArch as not supporting unaligned accesses for now, and
hope the compilers do a good job optimizing them. We can always flip
the switch later.
Suggested-by: CHEN Tao <redeast_cn@outlook.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2443>
It is valid to have the padding set to 1 on the first packet and it
happens very often from TWCC packets coming from libwebrtc. This means
that we were totally ignoring many TWCC packets.
Fix the test that checked that a first packet with padding was not valid,
and instead test a single TWCC packet with padding to check precisely what
this patch is about.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2422>
When processing the first event after probing the
file and being activated, requeue sticky events
as there's no requirement that demuxers send tag
and other events again after a seek - that's
why they're sticky.
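A minimal sketch of how sticky events can be replayed from a pad (not the
exact splitmuxsrc code; names assumed):

static gboolean
resend_sticky (GstPad * pad, GstEvent ** event, gpointer user_data)
{
  GstPad *srcpad = user_data;

  /* push a ref of each stored sticky event downstream again */
  gst_pad_push_event (srcpad, gst_event_ref (*event));
  return TRUE;                  /* keep iterating over all sticky events */
}

/* ... */
gst_pad_sticky_events_foreach (demux_pad, resend_sticky, output_srcpad);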
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2432>
- Consistently unref the chained buffer at the end of the chain
function, if we're not handing it off to `gst_pad_push`. This avoids a
few buffer leaks in the error paths in `_chain` and `_push_history`.
- When mapping the video frame fails, return a flow error instead of
crashing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2428>
If we just break the loop, we might run into the `gop != NULL` assert
that follows it. Rather, exit immediately with flushing flow.
Also use this flushing mechanism when we release a pad. This avoids
having an extra flag.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1030>
Background:
Whenever a caps event is received by appsink, the caps are stored in the
same internal queue as buffers. Only when enough buffers have been
popped from the queue to reach the caps, `priv->sample` gets its caps
updated to match, so that they are correct for the following buffers.
Note that as far as upstream elements are concerned, the caps of appsink
are updated immediately when the CAPS event is sent. Samples pulled from
appsink retain the old caps until a later buffer -- one that was sent by
upstream elements after the new caps -- is pulled.
The race condition:
When a flush is received, appsink clears the entire internal queue. The
caps of `priv->sample` are not updated as part of this process, and
instead remain as those of the sample that was last pulled by the user.
This leaves open a race condition where:
1. Upstream sends a new caps event, and possibly some buffers for the
new caps.
2. Upstream sends a flush (possibly from a different thread).
3. Upstream sends a new buffer for the new caps. Since as far as
upstream is concerned, appsink caps are the new caps already, no new
CAPS event is sent.
4. The appsink user pulls a sample, having not pulled before enough
samples to reach the buffers sent in step 1.
Bug: the pulled sample has the old caps instead of the new caps.
Fixing the race condition:
To avoid this problem, when a buffer is received after a flush,
`priv->sample`'s caps should be updated with the current caps before the
buffer is added to the internal queue.
Interestingly, before this patch, appsink already had code for this, in
gst_app_sink_render_common():
  /* queue holding caps event might have been FLUSHed,
   * but caps state still present in pad caps */
  if (G_UNLIKELY (!priv->last_caps &&
          gst_pad_has_current_caps (GST_BASE_SINK_PAD (psink)))) {
    priv->last_caps = gst_pad_get_current_caps (GST_BASE_SINK_PAD (psink));
    gst_sample_set_caps (priv->sample, priv->last_caps);
    GST_DEBUG_OBJECT (appsink, "activating pad caps %" GST_PTR_FORMAT,
        priv->last_caps);
  }
This code assumes `priv->last_caps` is reset when a flush is received,
which makes sense, but unfortunately, there was no code in the flush
code path resetting it.
This patch adds such code, therefore fixing the race condition. A unit
test demonstrating the bug and testing its behavior with the fix has
also been added.
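For illustration, a minimal sketch of the kind of reset the flush path needs
(function and field names assumed, not the exact patch):

static void
gst_app_sink_flush_unlocked (GstAppSink * appsink)
{
  GstAppSinkPrivate *priv = appsink->priv;

  /* forget the cached caps so the next buffer re-reads the pad's
   * current caps via the code quoted above */
  gst_caps_replace (&priv->last_caps, NULL);

  /* ... existing code clearing the internal queue ... */
}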
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2413>
This transition is meant to be very similar to crossfade, but
instead of fading out the background video at the same time as the
foreground fades in, the background video stays at 100% opacity
during the whole transition.
This essentially "restores" the old crossfade behaviour that was changed in:
eb48faf342
but using a new type enum, so that both behaviours are available,
letting applications choose.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2385>
The timestamp in the tfdt refers to the first trun box and if there are
multiple trun boxes then the distance between the first timestamps will
grow.
At some point this distance reaches a threshold, and the first sample's
timestamp of this trun box is reset to the tfdt.
This threshold is implemented for files where there is a jump in the
timeline between fragments and where this can be detected via a jump
between the end timestamp of the previous fragment and the tfdt of the
next. This behaviour is preserved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2409>
NVIDIA GPUs have an undocumented limitation regarding minimum resolution,
which can be queried via an NVDEC API. However, since we don't want to
bring CUDA/NVDEC API into D3D11, use hardcoded values for now
until we find a nice way for capability check.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2406>
Various elements are assuming that the pointer matches a pad template
they know about, and also randomly created pad templates might be
missing some important information that is necessary to create a valid
pad.
For example, creating a new pad template for audiomixer's sinkpad
without providing the correct GType would cause audiomixer to create a
GstAggregatorPad. That will then later fail spectacularly because it
assumes that it got a GstAudioAggregatorPad.
Passing a pad template that does not belong to the element class in here
will easily lead to undefined behaviour.
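For reference, a small sketch of the safe pattern, using a template that
actually belongs to the element class (element and template names are just
examples):

GstPadTemplate *templ =
    gst_element_get_pad_template (GST_ELEMENT (mixer), "sink_%u");
GstPad *pad =
    gst_element_request_pad (GST_ELEMENT (mixer), templ, NULL, NULL);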
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2410>
If for some reason the encoder produces frames with a pts higher than
the input one, we were dropping all the video encoder frames and ended
up crashing when trying to access the pts of a NULL pointer returned by
gst_video_encoder_get_oldest_frame().
I hit this scenario by feeding decreasing timestamps to vp8enc, which
seems to confuse the encoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2405>
Until March 2022, the FFmpeg MXF muxer would write the various index table
segments with the same instance ID, which should only be used if it is a
duplicate/repeated table.
In order to cope with those, we first compare the other index table segment
properties (body/index SID, start position) before comparing the instance
ID. This will ensure that we don't consider them as duplicate, but can still
detect "real" duplicates (which would have the same other properties).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2407>
If the stream chroma doesn't match any video format in the source caps
template (generated from the VA config surface formats), return the first
available format in the template instead of returning unknown, assuming
that the driver is capable of doing the color conversion.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2404>
Use the newly added gst_h265_parser_identify_and_split_nalu_hevc()
method to handle broken streams where a packetized NAL unit
contains a start-code prefix in it.
Such a stream is obviously wrong, but we know how to work around it,
and we even need to support such broken streams since stateless
decoder implementations are becoming primary decoder elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Add a gst_h265_parser_identify_and_split_nalu_hevc() method to
handle the case where a packetized stream contains start-code prefixes.
This new method behaves similarly to the existing gst_h265_parser_identify_nalu_hevc(),
but it scans for start-code prefixes to split the given data into
NAL units.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Instead of using a hard-coded list of preferred formats according to the
chroma type: if no caps are pre-negotiated, choose the first format with
the same chroma type from the template caps; if caps are pre-negotiated,
choose the first format with the same chroma type from the first caps
structure.
Also, all the decoders now check whether GST_VIDEO_FORMAT_UNKNOWN is
returned and fail the negotiation in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2351>
Hantro H1 and Rockchip VEPU2 drivers will pad the width/height to a
multiple of 16. In order to obtain the right JPEG size, the image needs
to be cropped using the S_SELECTION API. This support is added as best
effort since older drivers may emulate this by looking at the capture
queue width/height.
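A hedged sketch of the crop request (the exact buffer type and target depend
on the queue being configured; variable names assumed):

struct v4l2_selection sel = {
  .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
  .target = V4L2_SEL_TGT_CROP,
  .r = { .left = 0, .top = 0, .width = picture_width, .height = picture_height },
};

if (ioctl (video_fd, VIDIOC_S_SELECTION, &sel) < 0) {
  /* best effort: older drivers may not implement S_SELECTION at all */
}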
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2329>
gst_value_serialize() does more than what's needed for printf-ing,
especially when the given GValue is already a string. Just print the string
value as-is without gst_value_serialize() to avoid unreadable
output, especially for multi-byte character encodings.
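A small sketch of the idea (variable names assumed):

gchar *str;

if (G_VALUE_HOLDS_STRING (value)) {
  /* print the string as-is instead of serializing (and escaping) it */
  const gchar *s = g_value_get_string (value);
  str = g_strdup (s ? s : "");
} else {
  str = gst_value_serialize (value);
}

g_print ("%s\n", str);
g_free (str);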
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2387>
The V4L2 spec now requires decode_params flags to be set in accordance with
the frame's type. In particular this is required by the H.264 decoder of the
NVIDIA Tegra SoC to operate properly. Set the flags based on the type of the
parsed slices.
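A hedged sketch of the mapping (flag names as found in recent Linux stateless
H.264 uAPI headers, slice macros from GstCodecParsers; surrounding names
assumed):

if (GST_H264_IS_I_SLICE (slice_hdr) && nalu->idr_pic_flag)
  decode_params->flags |= V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC;
else if (GST_H264_IS_P_SLICE (slice_hdr))
  decode_params->flags |= V4L2_H264_DECODE_PARAM_FLAG_PFRAME;
else if (GST_H264_IS_B_SLICE (slice_hdr))
  decode_params->flags |= V4L2_H264_DECODE_PARAM_FLAG_BFRAME;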
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1757>
* Remove fields no longer used, or that can be replaced by smaller code
* Rename "channels" to a more meaningful "input pads"
* Directly handle/use combiner pads in the combiners instead of on the playbin3
main structure
* Remove the corresponding combiner sinkpad whenever a uridecodebin3 source pad
goes away
* If used, store the corresponding combiner sink pad in the SourcePad helper
structure
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2384>
GstD3D11ScreenCapture is a pipeline-independent global object that can be
shared by multiple src elements, in order to overcome a limitation of the
DXGI Desktop Duplication API: the API allows only a single capture session
per monitor in a process.
Therefore the GstD3D11ScreenCapture object must be able to handle the case
where a src element holds a different GstD3D11Device object, which can
happen when the GstD3D11Device context is not shared between pipelines.
What's changed:
* Allocate the capture texture with D3D11_RESOURCE_MISC_SHARED so that it
can be copied into another device's texture (see the sketch after this list)
* Hold additional shader objects per src element and use them when drawing
the mouse cursor
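A hedged sketch of the shared texture allocation (C COM macros; size and
format values are examples only):

D3D11_TEXTURE2D_DESC desc = { 0, };
ID3D11Texture2D *texture = NULL;
HRESULT hr;

desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
/* allow the texture to be opened and copied from another device */
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;

hr = ID3D11Device_CreateTexture2D (device, &desc, NULL, &texture);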
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1197
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2366>
mp4mux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mxfmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mpegtsmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
flvmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
Otherwise setting the srcpad caps based on the sinkpad caps event will
already push a segment event downstream before the upstream segment is
known.
If the upstream segments are just forwarded when the upstream segment
event arrives this would result in two segment events being sent
downstream, of which the first one will usually be simply wrong.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
The base class calls get_preferred_output_delay() while parsing the
sequence header, and new_sequence() is then called with the required
DPB size (including render-delay) information.
Thus the latency query should happen before the sequence header
parsing so that the subclass can report the required render-delay
accordingly via the get_preferred_output_delay() method
(e.g., zero delay in the case of a live pipeline).
This commit fixes wrong liveness signalling in the case of
upstream packetized formats.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2363>
The base class calls get_preferred_output_delay() while parsing the
sequence header, and new_sequence() is then called with the required
DPB size (including render-delay) information.
Thus the latency query should happen before the sequence header
parsing so that the subclass can report the required render-delay
accordingly via the get_preferred_output_delay() method
(e.g., zero delay in the case of a live pipeline).
This commit fixes wrong liveness signalling in the case of
upstream packetized formats.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2364>
In the case where not all streams have received any data, growing the interleave
by only 100ms is too restrictive and would cause some (valid) mpeg-ts streams to
hang.
Bump up the interleave growth rate for those use-cases to 500ms per input (still
up to the limit of 5s).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2370>
If there weren't any moved/dirty regions in the captured frame, the
viewport of the ID3D11DeviceContext would be left at whatever previous
value it had, which could lead to the cursor being drawn in a wrong
position and/or in an incorrect size.
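A small sketch of the unconditional viewport setup before drawing the cursor
(C COM macros; variable names assumed):

D3D11_VIEWPORT viewport = { 0, };

viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = (FLOAT) frame_width;
viewport.Height = (FLOAT) frame_height;
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;

/* always reprogram the viewport, even when no regions were dirty/moved */
ID3D11DeviceContext_RSSetViewports (context, 1, &viewport);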
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2362>
Make all codecs consistent so that the subclass can know the additional DPB
size requirement depending on the render-delay configuration, regardless
of codec. Note that the render-delay feature is not implemented for AV1
yet, but it's planned.
Also, consider new_sequence() a mandatory requirement, not optional.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2343>
As implemented, we only support the OpenGL 3 API from version 3.2 onward.
However, there is no issue enabling GLSL 1.30 even if we restrict our API usage
to OpenGL 2. This allows using texelFetch() on OpenGL 3.0 and 3.1 drivers.
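For illustration, a shader sketch showing what GLSL 1.30 buys us here
(embedded as a C string; not the actual shader used by the plugin):

static const gchar *frag_source =
    "#version 130\n"
    "uniform sampler2D tex;\n"
    "void main () {\n"
    "  /* texelFetch() requires GLSL >= 1.30, even on GL 3.0/3.1 drivers */\n"
    "  gl_FragColor = texelFetch (tex, ivec2 (gl_FragCoord.xy), 0);\n"
    "}\n";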
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2190>
Since the addition of a tiled format with subsampled tile size
(NV12_16L32S), getting the tile width/height shifts and tile
size has become more complex. Add a helper to extract and
scale this information for the selected plane and format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2190>
Since both g_value_set_object() and g_weak_ref_get() take a reference,
there will be two new references to the GstWebRTCICE object when there
should be only one. g_value_take_object() has the same functionality as
g_value_set_object() but does not take a reference.
Without this change, the GstWebRTCICE object will be leaked.
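A minimal sketch of the fix (field name assumed):

GstWebRTCICE *ice = g_weak_ref_get (&webrtc->priv->ice_weak);

/* g_weak_ref_get() already returned a new reference, so transfer it to
 * the GValue instead of adding a second one */
g_value_take_object (value, ice);   /* was: g_value_set_object (value, ice) */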
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2333>
Mixing C loops with switch statements is a bad idea as break has a
different meaning in both. Breaking inside the switch statements wrongly
caused further loop iterations.
Instead use goto to get out of the loop and continue to do another loop
iteration, and never ever use break except for the end of a case.
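An illustrative sketch of the pattern (helper and enum names hypothetical):

static GstFlowReturn
drain_units (GstByteReader * br)
{
  GstFlowReturn ret = GST_FLOW_OK;

  while (gst_byte_reader_get_remaining (br) > 0) {
    /* identify_unit() is a hypothetical helper that consumes from the reader */
    switch (identify_unit (br)) {
      case UNIT_NEED_MORE_DATA:
        goto out;                       /* really leave the while loop */
      case UNIT_OK:
        continue;                       /* start the next loop iteration */
      default:
        break;                          /* only ends this case */
    }
  }

out:
  return ret;
}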
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2336>
Some streams have 2 PMT sections in a single TS packet. The first one is "valid"
but doesn't contain/define any streams. That causes an unrecoverable issue when
we try to activate the 2nd (valid) PMT.
Instead of doing that, pre-emptively refuse to process a PMT without any
streams present in it. We still post that section on the bus to inform
applications.
Fixes #1181
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2310>
In push mode (streaming), if the chunk buffer received from _chain is bigger
than the output buffer size, the DISCONT flag of the first received chunk
buffer is propagated to all of the divided buffers. These buffers with
unexpected DISCONT flags are then handled abnormally when the sampling rate is
changed by the audioresample element.
So unset the unnecessary DISCONT flag before pad_push().
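A small sketch of the flag handling (variable names assumed):

/* only the first sub-buffer split from an input chunk may keep DISCONT */
if (!first_subbuffer)
  GST_BUFFER_FLAG_UNSET (outbuf, GST_BUFFER_FLAG_DISCONT);

ret = gst_pad_push (srcpad, outbuf);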
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2305>
Previously, we only added it when actually performing synchronization
based on the NTP time.
The information can be useful downstream in other situations too, and
we can compute a NTP time as soon as we get a sender report with the
relevant information.
Co-authored-by: Mathieu Duponchelle <mathieu@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2252>
The decision to store the input buffer depends on whether extensions
are to be added to the output buffer, I assume as an optimization.
This creates an issue for subclasses that call negotiate(), where
header_exts is actually populated, from their handle_buffer()
implementation: at chain time, no header extension has been negotiated
yet, which means that we don't add extensions to the first batch of
buffers that comes out.
Keep track of whether negotiate has been called (this is different
from the negotiated field) and always store the input buffer until
then. This fixes the issue while largely preserving the optimization.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2304>
The previous iteration of the code was inferring the type of the
frame by looking at the overall size of the gst-payloaded packet.
It is more robust to actually parse the payload and look at the
actual data buffers it contains.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
regardless of whether they are input as individual buffers or
buffer lists.
The ONVIF specification requires all packets to hold the extension;
it makes no sense to behave differently when handling buffer lists.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
A pipeline such as:
gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12,colorimetry=\(string\)bt709 \
! videoscale ! video/x-raw,format=I420 ! fakesink
always triggers an error:
ERROR video-info video-info.c:556:gst_video_info_from_caps: no width property given
Because it is called before fixate_size(), the src caps' resolution
may be absent or not fixed. That means the src video info cannot
be created correctly, and we cannot inherit the colorimetry and chroma-site
from the input caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2289>
Fixing this pipeline:
gst-launch-1.0 filesrc location=sample.png ! pngdec ! videorate ! fakesink
- videorate receives a single buffer with pts = 0, duration = invalid;
- then it receives eos triggering this buffer to be pushed downstream;
- the pushing code was assuming that a duration was set, which is
impossible as we received a single buffer and no output framerate was
set either. So the best we can do is to push the buffer without
duration.
Fixes #1177
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2296>
The VA pool is used for GPU-side surfaces/images, so its alignment should
not be changed arbitrarily by others. Thus we decided not to expose the
GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT option anymore.
Instead, users can call gst_buffer_pool_config_set_va_alignment() to
set its surface/image alignment.
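A hedged usage sketch (the signature of the new helper is assumed here;
padding values are examples only):

GstVideoAlignment align;
GstStructure *config = gst_buffer_pool_get_config (pool);

gst_video_alignment_reset (&align);
align.padding_right = 16;       /* example values */
align.padding_bottom = 16;
gst_buffer_pool_config_set_va_alignment (config, &align);

gst_buffer_pool_set_config (pool, config);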
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2282>
According to spec:
color range equal to 0 shall be referred to as the studio swing
representation and color range equal to 1 shall be referred to as
the full swing representation.
The current status is just the opposite.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2288>
GAP events flagged with MISSING_DATA are transformed into GAP buffers
flagged with CORRUPTED.
In these cases, it is preferable to simply keep rendering the previous
buffer (if there was one) instead of flashing the pad in and out of
view.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/708>
When the GAP event was flagged with MISSING_DATA, subclasses
may want to adopt a different behaviour, for example by repeating
the last buffer.
As we turn these gap events into gap buffers, we need to flag those;
we do so with a new custom meta.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/708>
Returning TRUE from the `transform_meta` function tells
GstBaseTransform to copy the meta into the new buffer. If videoscale
has already transformed a meta by scaling it, it should always return
FALSE to avoid duplicating the meta.
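A hedged sketch of such an override (the meta type checked here is only an
example, not necessarily the one videoscale handles):

static gboolean
gst_video_scale_transform_meta (GstBaseTransform * trans, GstBuffer * outbuf,
    GstMeta * meta, GstBuffer * inbuf)
{
  /* this meta was already scaled and attached to outbuf by the element,
   * so tell the base class not to copy it again */
  if (meta->info->api == GST_VIDEO_REGION_OF_INTEREST_META_API_TYPE)
    return FALSE;

  /* let the base class copy everything else */
  return TRUE;
}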
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1630>
Meson generates a gdbinit file that will automatically load the gstreamer
script. However, that script uses a helper Python module, so PYTHONPATH
needs to point to the right location in the source
tree to be able to find gst_gdb.py.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1796>
When the sink is configured to create sockets with an explicit bind
address, then the created socket gets set to the udp_socket field
regardless of whether the bind address indicated that the socket
family should be IPv4 or IPv6. When binding to an IPv6 address, this
results in the following error:
gstmultiudpsink.c:1285:gst_multiudpsink_configure_client:<rtcpsink>
error: Invalid address family (got 10)
This patch adds a check of the address family being bound to and sets
the created socket to used_socket or used_socket_v6, accordingly.
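A minimal sketch of the family check (field names as described above):

GSocketFamily family = g_socket_address_get_family (bind_addr);

if (family == G_SOCKET_FAMILY_IPV6)
  sink->used_socket_v6 = socket;
else
  sink->used_socket = socket;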
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1551>
RFC 8216 6.3.3 "Playing the Media Playlist File" states that for live media
playlists "the client SHOULD NOT choose a segment that starts less than three
target durations from the end of the Playlist file"
This is an off-by-one error. Since we are looking for the "index" of the
segment, we need to subtract 1 from the searched position.
Ex: For a playlist with 12 entries, we want to start playback on the 9th segment
... which is at index 8.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2259>
When we fixup src caps, the current way of handling the HDR fields is not
correct.
1. We trim the HDR fields only when the input caps are not a subset of the
fixup src caps. But in fact, input caps with HDR fields such as
"mastering-display-info" can possibly be a subset of the fixup src caps,
if all their other fields are the same.
2. We always copy the colorimetry from input caps to src caps if it is
absent. But when hdr-tone-mapping is enabled, the HDR->SDR conversion makes
the colorimetry change. We should use downstream's setting, or just use the
default colorimetry of SDR.
We change this to:
1. If hdr-tone-mapping is enabled, we trim all HDR fields and add a correct
colorimetry.
2. Copy colorimetry from input if it is still absent.
3. Consider the subset replacement.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2244>
Since d0133a2d11 "videoconvert: Allow
passthrough for ANY caps features" videoconvert will always claim that
it supports any kind of memory, which is only true in a very specific case
(when it is running in passthrough mode). To get elements that autoplug
converters depending on the caps running in the pipeline (like
autovideoconvert), we need converters not to lie about what they can
do when queried with `accept_caps` or `query_caps`.
This still accepts any caps feature as before but it introduces
a restriction in the way we handle memory capsfeatures.
We keep previous behaviour in videoconvert and videoscale.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/898>
Now that videoconvert and videoscale are both based on
GstVideoConverter and use the exact same code, it makes much more
sense to have one element doing the two operations, and it can be
more efficient in some cases (one single path for both operations).
This removes the `videoscale` and `videoconvert` plugins but keeps the
elements, making each of them do both operations (adding some API to each element).
There is a small change in API for the `videoscale:dither` property, which
was previously a totally unused boolean; it is now an enum and is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/898>
The RTCP SR packet might be without SDES in case of a reduced-size RTCP
packet. For syncing purposes the CNAME is needed but it might be known
already from an earlier RTCP packet or out of band, via the SDP for
example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132>
The format of the caps fields is
ssrc-(SSRC_VALUE)-(ATTRIBUTE_NAME)=(ATTRIBUTE_VALUE).
Parsing of the attributes from the caps into the SDP is not implemented
as this depends not only on a single stream's caps but on the whole rtpbin
configuration.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132>
Found via a Clang static analyzer build. Specifically we had:
gstav1parse.c[1850,11] in gst_av1_parse_detect_stream_format: Logic error: The left operand of '==' is a garbage value
gstav1parse.c[1606,11] in gst_av1_parse_handle_to_small_and_equal_align: Logic error: The left operand of '==' is a garbage value
Also a couple of false-positives:
gstav1parse.c[1398,24] in gst_av1_parse_handle_one_obu: Logic error: Branch condition evaluates to a garbage value
gstav1parse.c[1440,37] in gst_av1_parse_handle_one_obu: Logic error: The left operand of '-' is a garbage value
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2230>
GLib guarantees libintl is always present, using proxy-libintl as a
last resort. There is no need to mock the gettext API any more.
This fixes the static build on Windows because G_INTL_STATIC_COMPILATION must
be defined before including libintl.h, and GLib does it for us as part
of including glib.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2028>
If a file includes a new version of a plugin that exists in the
registry, the output of gst-inspect is incorrect. The output has the
correct version but an incorrect filename and element description.
This seems to have also fixed some documentation issues.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1344>
Our decoder implementation does not use the downstream d3d11 pool for
decoding because of the special requirements of D3D11/DXVA. So preallocation
using the downstream buffer pool would waste GPU memory in most cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2211>
This provides new HLS, DASH and MSS adaptive demuxer elements as a single plugin.
These elements offer many improvements over the legacy elements. They will only
work within a streams-aware context (`urisourcebin`, `uridecodebin3`,
`decodebin3`, `playbin3`, ...).
Stream selection and buffering are handled internally, which allows them to
directly manage the elementary streams and stream selection.
Authors:
* Edward Hervey <edward@centricular.com>
* Jan Schmidt <jan@centricular.com>
* Piotrek Brzeziński <piotr@centricular.com>
* Tim-Philipp Müller <tim@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2117>
This reverts commit 652773de36 and
modifies it to rename the caps field name to coded-picture-structure.
It was previously removed because it confuses the decoder and we didn't
have a valid use case for including it in the encoded caps at this
stage. We now do have such a use case but still don't want to confuse
the decoder, so the field is renamed.
However, it is still not accurate without looking at the SEI picture
structure of each frame, so it was named coded-picture-structure. If its
value is "frame" it is most likely progressive, if it's "field" it is
most likely interlaced or mixed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2177>
get_merged_collection() returns an owned stream collection and was
leaked in the else block.
Fix leak when running:
GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7,leaks:6" gst-play-1.0 --use-playbin3 test.mkv
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/954>
Make sure that the requested stream selection isn't identical to the current
one. If that's the case, just carry on as usual.
This avoids multiple `streams-selected` posting ... when the selection didn't
change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2185>
* glimagesink is not a recommended one on Windows
* Remove directdrawsink section
* d3dvideosink is legacy and should not be recommended
* Add d3d11videosink part
* directsoundsink should be deprecated
* Add wasapisink/wasapi2sink part
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2144>
The current way names the level by the number of B frames it contains: the
fewer it contains, the higher the level. So the non-ref B frames are in the
lowest layer and the B frames in the highest level refer to I/P frames.
But the widely used convention is just the opposite: the ref B frames are in
the lower levels and non-ref B frames are at the highest level.
This is just a terminology change and does not have any effect on the
compression result or quality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2149>
It doesn't matter for measurement purposes whether receiving them takes
a while, and various PTP servers do not prioritize sending them, which
caused them to be dropped unnecessarily and prevented proper
synchronization with such servers.
This is especially a problem if the RTTs in the network are very low
compared to the additional delay imposed by the server.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2161>
timeapi.h is missing in our MinGW toolchain. Include the mmsystem.h
header instead, which defines the needed structs and APIs in the case of our
MinGW toolchain. Note that in the case of the native Windows 10 SDK (MSVC
build), mmsystem.h will include timeapi.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2153>
In case of re-syncing (i.e. moving to another partition to avoid too much of an
interleave), there were previously no checks to figure out whether a given
partition was already fully handled (i.e. when coming across it again after a
previous resync).
In order to handle this at least for single-track partitions, check whether we
have reached the essence track duration, and if so skip the partition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
The essence track position should only be overridden if we successfully switched
to another position. In the case of EOS we do not want to override it, else we
would increase the track position *again* at the end of this function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>