Until March 2022, the FFmpeg MXF muxer would write the various index table
segments with the same instance ID, which should only be used if it is a
duplicate/repeated table.
In order to cope with those, we first compare the other index table segment
properties (body/index SID, start position) before comparing the instance
ID. This ensures that we don't consider them duplicates, but can still
detect "real" duplicates (which would have the same other properties).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2407>
If the stream chroma doesn't match any video format in the source
caps template (generated from the VA config surface formats), return the
first available format in the template instead of returning unknown,
assuming that the driver is capable of doing color conversions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2404>
Use the newly added gst_h265_parser_identify_and_split_nalu_hevc()
method to handle broken streams where a packetized NAL unit
contains a start-code prefix.
Such a stream is obviously wrong, but we know how to work around it,
and we even need to support such broken streams since stateless
decoder implementations are becoming primary decoder elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Add a gst_h265_parser_identify_and_split_nalu_hevc() method to
handle the case where a packetized stream contains start-code prefixes.
This new method behaves similarly to the existing gst_h265_parser_identify_nalu_hevc(),
but it also scans for start-code prefixes to split the given data into
NAL units.
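A rough usage sketch; the exact signature is assumed here to mirror
gst_h265_parser_identify_nalu_hevc() with a GArray of GstH265NalUnit and a
consumed-bytes output, so check gsth265parser.h for the authoritative prototype:
  GArray *nalus = g_array_new (FALSE, FALSE, sizeof (GstH265NalUnit));
  gsize consumed = 0;

  if (gst_h265_parser_identify_and_split_nalu_hevc (parser, map.data, 0,
          map.size, nal_length_size, nalus, &consumed) == GST_H265_PARSER_OK) {
    for (guint i = 0; i < nalus->len; i++) {
      GstH265NalUnit *nalu = &g_array_index (nalus, GstH265NalUnit, i);
      /* each NAL unit is now free of embedded start-code prefixes */
    }
  }
  g_array_unref (nalus);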
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Instead of using a hard-coded list of preferred formats according the
chroma type, now if now caps are pre-negotiated, from template caps
will choose the first format with the same chroma type. If
pre-negotiated, then it will choose the first format, with same chroma
type, from the first caps structure.
Also all the decoders will check if GST_VIDEO_FORMAT_UNKNOWN is
returned, failing the negotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2351>
Hantro H1 and Rockchip VEPU2 drivers will pad the width/height to a
multiple of 16. In order to obtain the right JPEG size, the image needs
to be cropped using the S_SELECTION API. This support is added as best
effort since older drivers may emulate this by looking at the capture
queue width/height.
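A hedged sketch of the crop request on the encoder's OUTPUT (raw input) queue,
using the standard V4L2 selection API (file descriptor and size variables are
illustrative; needs <linux/videodev2.h> and <sys/ioctl.h>):
  struct v4l2_selection sel = {
    .type = V4L2_BUF_TYPE_VIDEO_OUTPUT,
    .target = V4L2_SEL_TGT_CROP,
    .r = { .left = 0, .top = 0, .width = width, .height = height },
  };

  if (ioctl (video_fd, VIDIOC_S_SELECTION, &sel) < 0) {
    /* best effort: older drivers may not implement the selection API */
  }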
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2329>
gst_value_serialize() does more than what's needed for printing,
especially when the given GValue already holds a string. Just print the
string value as-is, without gst_value_serialize(), to avoid unreadable
output, especially for multi-byte character encodings.
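The idea, as a small sketch (variable names are illustrative):
  gchar *str;

  if (G_VALUE_HOLDS_STRING (value))
    str = g_value_dup_string (value);   /* print verbatim */
  else
    str = gst_value_serialize (value);

  g_print ("%s\n", str);
  g_free (str);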
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2387>
The V4L2 spec now requires the decode_params flags to be set in accordance
with the frame's type. In particular this is required by the H.264 decoder
of the NVIDIA Tegra SoC to operate properly. Set the flags based on the
type of the parsed slices.
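A hedged sketch of the mapping, using the flag names from the Linux stateless
H.264 uAPI (the surrounding variables are illustrative):
  if (GST_H264_IS_B_SLICE (slice_hdr))
    decode_params->flags |= V4L2_H264_DECODE_PARAM_FLAG_BFRAME;
  else if (GST_H264_IS_P_SLICE (slice_hdr))
    decode_params->flags |= V4L2_H264_DECODE_PARAM_FLAG_PFRAME;
  if (nalu_is_idr)
    decode_params->flags |= V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC;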
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1757>
* Remove fields no longer used, or that can be replaced by smaller code
* Rename "channels" to a more meaningful "input pads"
* Directly handle/use combiner pads in the combiners instead of on the playbin3
main structure
* Remove the corresponding combiner sinkpad whenever a uridecodebin3 source pad
goes away
* If used, store the corresponding combiner sink pad in the SourcePad helper
structure
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2384>
GstD3D11ScreenCapture is a pipeline-independent global object that can be
shared by multiple src elements, in order to overcome a limitation of the
DXGI Desktop Duplication API.
Note that the API allows only a single capture session per monitor in a
process.
Therefore the GstD3D11ScreenCapture object must be able to handle the case
where a src element holds a different GstD3D11Device object, which can
happen when the GstD3D11Device context is not shared between pipelines.
What's changed:
* Allocate the capture texture with D3D11_RESOURCE_MISC_SHARED so that it
can be copied into another device's texture
* Hold additional shader objects per src element and use them when drawing
the mouse cursor
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1197
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2366>
mp4mux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
With the default aggregator negotiation enabled, the same caps
would be pushed twice at the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mxfmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
With the default aggregator negotiation enabled, the same caps
would be pushed twice at the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mpegtsmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
With the default aggregator negotiation enabled, the same caps
would be pushed twice at the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
flvmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
With the default aggregator negotiation enabled, the same caps
would be pushed twice at the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
Otherwise setting the srcpad caps based on the sinkpad caps event will
already push a segment event downstream before the upstream segment is
known.
If the upstream segments are just forwarded when the upstream segment
event arrives, this would result in two segment events being sent
downstream, of which the first one will usually simply be wrong.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
The base class calls get_preferred_output_delay() while parsing the
sequence header, and new_sequence() is then called with the required
DPB size (which includes the render delay).
Thus the latency query should happen before the sequence header is
parsed, so that the subclass can report the required render delay
accordingly via the get_preferred_output_delay() method
(e.g., zero delay in the case of a live pipeline).
This commit fixes wrong liveness signalling in the case of
upstream packetized formats.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2363>
The base class calls get_preferred_output_delay() while parsing the
sequence header, and new_sequence() is then called with the required
DPB size (which includes the render delay).
Thus the latency query should happen before the sequence header is
parsed, so that the subclass can report the required render delay
accordingly via the get_preferred_output_delay() method
(e.g., zero delay in the case of a live pipeline).
This commit fixes wrong liveness signalling in the case of
upstream packetized formats.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2364>
In the case where not all streams have received any data, growing the interleave
by only 100ms is too restrictive and would cause some (valid) mpeg-ts streams to
hang.
Bump up the interleave growth rate for those use-cases to 500ms per input (still
up to the limit of 5s).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2370>
If there weren't any moved/dirty regions in the captured frame, the
viewport of the ID3D11DeviceContext would be left at whatever previous
value it had, which could lead to the cursor being drawn in a wrong
position and/or in an incorrect size.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2362>
Make all codecs consistent, so that subclasses can know the additional DPB
size requirement depending on the render-delay configuration, regardless
of the codec. Note that the render-delay feature is not implemented for
AV1 yet, but it's planned.
Also, consider new_sequence() a mandatory requirement, not optional.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2343>
As implemented, we only support the OpenGL 3 API starting from version 3.2.
However, there is no issue enabling GLSL 1.30 even if we restrict our API usage
to OpenGL 2. This allows using texelFetch() on OpenGL 3.0 and 3.1 drivers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2190>
Since the addition of a tiling format with subsampled tile size
(NV12_16L32S), getting the tile width/height shifts and tile
size has become more complex. Add a helper to extract and
scale this information for the selected plane and format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2190>
Since both g_value_set_object() and g_weak_ref_get() take a reference
there will be two new references to the GstWebRTCICE object when there
should be only one. g_value_take_object() has the same functionality as
g_value_set_object() but does not take a reference.
Without this change, the GstWebRTCICE object will be leaked.
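In other words (the "ice" weak-ref field name is illustrative):
  /* Before: g_weak_ref_get() returns a new ref and g_value_set_object()
   * adds another one, so one reference is leaked. */
  g_value_set_object (value, g_weak_ref_get (&priv->ice));

  /* After: g_value_take_object() assumes ownership of the reference
   * returned by g_weak_ref_get(). */
  g_value_take_object (value, g_weak_ref_get (&priv->ice));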
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2333>
Mixing C loops with switch statements is a bad idea as break has a
different meaning in both. Breaking inside the switch statements wrongly
caused further loop iterations.
Instead, use goto to get out of the loop and continue to go on with another
loop iteration, and never ever use break except at the end of a case.
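A generic illustration of the pattern (not the actual GStreamer code):
  while (have_more_data ()) {
    switch (get_packet_type ()) {
      case PACKET_PAYLOAD:
        handle_payload ();
        break;        /* only leaves the switch; the loop keeps going */
      case PACKET_ERROR:
        goto out;      /* actually leaves the loop */
      default:
        break;
    }
  }
out:
  cleanup ();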
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2336>
Some streams have 2 PMT sections in a single TS packet. The first one is "valid"
but doesn't contain/define any streams. That causes an unrecoverable issue when
we try to activate the 2nd (valid) PMT.
Instead of doing that, pre-emptively refuse to process a PMT without any
streams present. We still post that section on the bus to inform applications.
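A sketch of the added guard, assuming the streams array of GstMpegtsPMT:
  /* Refuse to activate a PMT that doesn't define any stream; the section
   * is still posted on the bus for applications. */
  if (pmt->streams == NULL || pmt->streams->len == 0)
    return;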
Fixes #1181
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2310>
In push mode (streaming), if the chunk buffer received from _chain is bigger
than the output buffer size, the DISCONT flag of the first received chunk
buffer is propagated to all of the divided buffers. These unexpected DISCONT
flags cause the data to be abnormally transformed when changing the sampling
rate with the audioresample element.
So unset the unnecessary DISCONT flag before pad_push().
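A sketch of the fix (variable names are illustrative):
  /* Only the first buffer cut from the chunk should keep DISCONT */
  if (!first_buffer_of_chunk)
    GST_BUFFER_FLAG_UNSET (outbuf, GST_BUFFER_FLAG_DISCONT);

  ret = gst_pad_push (srcpad, outbuf);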
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2305>
Previously, we only added it when actually performing synchronization
based on the NTP time.
The information can be useful downstream in other situations too, and
we can compute an NTP time as soon as we get a sender report with the
relevant information.
Co-authored-by: Mathieu Duponchelle <mathieu@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2252>
The decision to store the input buffer depends on whether extensions
are to be added to the output buffer, I assume as an optimization.
This creates an issue for subclasses that call negotiate(), where
header_exts is actually populated, from their handle_buffer()
implementation: at chain time, no header extension has been negotiated
yet, which means that we don't add extensions to the first batch of
buffers that comes out.
Keep track of whether negotiate has been called (this is different
from the negotiated field) and always store the input buffer until
then. This fixes the issue while largely preserving the optimization.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2304>
The previous iteration of the code was inferring the type of the
frame by looking at the overall size of the gst-payloaded packet.
It is more robust to actually parse the payload and look at the
actual data buffers it contains.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
regardless of whether they are input as individual buffers or
buffer lists.
The ONVIF specification requires all packets to hold the extension;
it makes no sense to behave differently when handling buffer lists.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
A pipeline such as:
gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12,colorimetry=\(string\)bt709 \
! videoscale ! video/x-raw,format=I420 ! fakesink
always triggers an error:
ERROR video-info video-info.c:556:gst_video_info_from_caps: no width property given
Because it is called before fixate_size(), the src caps' resolution
may be absent or not fixed yet. That means the src video info cannot
be created correctly and we cannot inherit the colorimetry and chroma-site
from the input caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2289>
Fixing this pipeline:
gst-launch-1.0 filesrc location=sample.png ! pngdec ! videorate ! fakesink
- videorate receives a single buffer with pts = 0, duration = invalid;
- then it receives eos triggering this buffer to be pushed downstream;
- the pushing code was assuming that a duration was set, which is
impossible as we received a single buffer and no output framerate was
set either. So the best we can do is to push the buffer without
duration.
Fix #1177
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2296>
The va pool is used for GPU-side surfaces/images, and their alignment should
not be changed arbitrarily by others. So we decided not to expose the
GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT option anymore.
Instead, users can call gst_buffer_pool_config_set_va_alignment() to
set the surface/image alignment.
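A hedged usage sketch; the prototype of gst_buffer_pool_config_set_va_alignment()
is assumed here to take the pool config and a GstVideoAlignment:
  GstStructure *config = gst_buffer_pool_get_config (pool);
  GstVideoAlignment align;

  gst_video_alignment_reset (&align);
  align.padding_right = 16;    /* illustrative values */
  align.padding_bottom = 16;
  gst_buffer_pool_config_set_va_alignment (config, &align);
  gst_buffer_pool_set_config (pool, config);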
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2282>
According to the spec:
color range equal to 0 shall be referred to as the studio swing
representation and color range equal to 1 shall be referred to as
the full swing representation.
The current code does just the opposite.
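The intended mapping, as a sketch (the field path is illustrative):
  if (color_config->color_range)    /* 1: full swing */
    colorimetry.range = GST_VIDEO_COLOR_RANGE_0_255;
  else                              /* 0: studio swing */
    colorimetry.range = GST_VIDEO_COLOR_RANGE_16_235;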
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2288>
GAP events flagged with MISSING_DATA are transformed into GAP buffers
flagged with CORRUPTED.
In these cases, it is preferable to simply keep rendering the previous
buffer (if there was one) instead of flashing the pad in and out of
view.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/708>
When the GAP event was flagged with MISSING_DATA, subclasses
may want to adopt a different behaviour, for example by repeating
the last buffer.
As we turn these gap events into gap buffers, we need to flag
those; we do so with a new custom meta.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/708>
Returning TRUE from the `transform_meta` function tells
GstBaseTransform to copy the meta into the new buffer. If videoscale
has already transformed a meta by scaling it, it should always return
FALSE to avoid duplicating the meta.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1630>
Meson generates a gdbinit file that will automatically load the gstreamer
gdb script. However, that script uses a helper Python module that needs
PYTHONPATH to point to the right location in the source
tree to be able to find gst_gdb.py.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1796>
When the sink is configured to create sockets with an explicit bind
address, then the created socket gets set to the udp_socket field
regardless of whether the bind address indicated that the socket
family should be IPv4 or IPv6. When binding to an IPv6 address, this
results in the following error:
gstmultiudpsink.c:1285:gst_multiudpsink_configure_client:<rtcpsink>
error: Invalid address family (got 10)
This patch adds a check of the address family being bound to and sets
the created socket to used_socket or used_socket_v6, accordingly.
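A sketch of the added check (variable names are illustrative):
  if (g_socket_address_get_family (bind_addr) == G_SOCKET_FAMILY_IPV6)
    sink->used_socket_v6 = created_socket;
  else
    sink->used_socket = created_socket;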
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1551>
RFC 8216 6.3.3 "Playing the Media Playlist File" states that for live media
playlists "the client SHOULD NOT choose a segment that starts less than three
target durations from the end of the Playlist file"
This is an off-by-one error. Since we are looking for the "index" of the
segment, we need to subtract 1 from the searched position.
Ex: For a playlist with 12 entries, we want to start playback on the 9th segment
... which is at index 8.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2259>
When we fix up the src caps, the current way of handling the HDR fields is not
correct.
1. We trim the HDR fields only when the input caps are not a subset of the
fixed-up src caps. But in fact, input caps with HDR fields such as
"mastering-display-info" can possibly be a subset of the fixed-up src caps,
if all their other fields are the same.
2. We always copy the colorimetry from the input caps to the src caps if it is
absent. But when hdr-tone-mapping is enabled, the HDR->SDR conversion changes
the colorimetry. We should use downstream's setting, or just use the
default colorimetry of SDR.
The changes are:
1. If hdr-tone-mapping is enabled, trim all HDR fields and add a correct
colorimetry.
2. Copy the colorimetry from the input if it is still absent.
3. Consider the subset replacement.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2244>
Since d0133a2d11 "videoconvert: Allow
passthrough for ANY caps features" videoconvert will always claim that
it supports any kind of memory, which is only true in a very specific case
(when it is running in passthrough mode). To get elements that autoplug
converters depending on the caps running in the pipeline (like
autovideoconvert), we need converters not to lie about what they can
do when handling `accept_caps` or `query_caps` queries.
This still accepts any caps feature as before, but it introduces
a restriction in the way we handle memory caps features.
We keep the previous behaviour in videoconvert and videoscale.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/898>
Now that videoconvert and videoscale are both based on
GstVideoConverter and are using the exact same code, it makes much more
sense to have one element doing the two operations, which can also be
more efficient in some cases (one single path for both operations).
This removes the `videoscale` and `videoconvert` plugins but keeps the
elements, making each of them do both operations (adding some API to each
element).
There is a small change in API for the `videoscale:dither` property, which
was previously a totally unused boolean; it is now an enum and is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/898>
The RTCP SR packet might be without SDES in case of a reduced-size RTCP
packet. For syncing purposes the CNAME is needed but it might be known
already from an earlier RTCP packet or out of band, via the SDP for
example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132>
The format of the caps fields is:
ssrc-(SSRC_VALUE)-(ATTRIBUTE_NAME)=(ATTRIBUTE_VALUE)
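For example (hypothetical values), the SDP attribute
"a=ssrc:3735928559 cname:user@example.com" becomes the caps field
ssrc-3735928559-cname=(string)user@example.com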
Parsing of the attributes from the caps into the SDP is not implemented,
as this depends not only on a single stream's caps but on the whole rtpbin
configuration.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2132>
Found via a Clang analyzer build. Specifically we had:
gstav1parse.c[1850,11] in gst_av1_parse_detect_stream_format: Logic error: The left operand of '==' is a garbage value
gstav1parse.c[1606,11] in gst_av1_parse_handle_to_small_and_equal_align: Logic error: The left operand of '==' is a garbage value
Also a couple of false-positives:
gstav1parse.c[1398,24] in gst_av1_parse_handle_one_obu: Logic error: Branch condition evaluates to a garbage value
gstav1parse.c[1440,37] in gst_av1_parse_handle_one_obu: Logic error: The left operand of '-' is a garbage value
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2230>
GLib guarantees libintl is always present, using proxy-libintl as a
last resort. There is no need to mock the gettext API any more.
This fixes the static build on Windows, because G_INTL_STATIC_COMPILATION must
be defined before including libintl.h, and GLib does it for us as part
of including glib.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2028>
If a file includes a new version of a plugin that exists in the
registry, the output of gst-inspect is incorrect. The output has the
correct version but an incorrect filename and element description.
This also seems to have fixed some documentation issues.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1344>
Our decoder implementation does not use the downstream d3d11 pool for
decoding because of the special requirements of D3D11/DXVA. So preallocation
using the downstream buffer pool would waste GPU memory in most cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2211>
This provides new HLS, DASH and MSS adaptive demuxer elements as a single plugin.
These elements offer many improvements over the legacy elements. They will only
work within a streams-aware context (`urisourcebin`, `uridecodebin3`,
`decodebin3`, `playbin3`, ...).
Stream selection and buffering are handled internally; this allows them to
directly manage the elementary streams and stream selection.
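They can be exercised with a streams-aware pipeline such as (hypothetical URI):
gst-launch-1.0 playbin3 uri=https://example.com/live/stream.m3u8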
Authors:
* Edward Hervey <edward@centricular.com>
* Jan Schmidt <jan@centricular.com>
* Piotrek Brzeziński <piotr@centricular.com>
* Tim-Philipp Müller <tim@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2117>
This reverts commit 652773de36 and
modifies it to rename the caps field name to coded-picture-structure.
It was previously removed because it confuses the decoder and we didn't
have a valid use case for including it in the encoded caps at this
stage. We now do have such a use case but still don't want to confuse
the decoder, so the field is renamed.
However, it is still not accurate without looking at the SEI picture
structure of each frame, so it was named coded-picture-structure. If its
value is "frame", the content is most likely progressive; if it is "field",
it is most likely interlaced or mixed.
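For example (illustrative caps), a stream coded as frames would carry:
video/x-h264, coded-picture-structure=(string)frame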
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2177>
get_merged_collection() returns an owned stream collection and was
leaked in the else block.
Fix leak when running:
GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7,leaks:6" gst-play-1.0 --use-playbin3 test.mkv
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/954>
Make sure that the requested stream selection isn't identical to the current
one. If that's the case, just carry on as usual.
This avoids posting multiple `streams-selected` messages when the selection
didn't change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2185>
* glimagesink is not a recommended one on Windows
* Remove directdrawsink section
* d3dvideosink is legacy and should not be recommended
* Add d3d11videosink part
* directsoundsink should be deprecated
* Add wasapisink/wasapi2sink part
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2144>
The current way names the level by the number of B frames it contains: the
fewer it contains, the higher the level. So the non-ref B frames are in the
lowest layer, and the B frames in the highest level refer to I/P frames.
But the widely used convention is just the opposite: the ref B frames are in
the lower levels and the non-ref B frames are at the highest level.
This is just a terminology change and does not have any effect on the
compression result or quality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2149>
It doesn't matter for measurement purposes whether receiving them takes
a while, and various PTP servers do not prioritize sending them, which
caused them to be dropped unnecessarily and prevented proper
synchronization with such servers.
This is especially a problem if the RTTs in the network are very low
compared to the additional delay imposed by the server.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2161>
timeapi.h is missing in our MinGW toolchain. Include the mmsystem.h
header instead, which defines the relevant structs and APIs in the case of
our MinGW toolchain. Note that in the case of the native Windows 10 SDK
(MSVC build), mmsystem.h will include timeapi.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2153>
In case of re-syncing (i.e. moving to another partition to avoid too much of an
interleave), there were previously no checks to figure out whether a given
partition was already fully handled (i.e. when coming across it again after a
previous resync).
In order to handle this at least for single-track partitions, check whether we
have reached the essence track duration, and if so skip the partition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
The essence track position should only be overridden if we successfully
switched to another position. In the case of EOS we do not want to override
it, else we would increase the track position *again* at the end of this
function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>