This wasn't really done before, and is needed in order to detect potential section
changes for sections that have identical information (such as when switching
between streams that have the same PAT/PMT PID and subtable information).
Other checks exist in tsbase to detect if the "new" PAT/PMT really is an update or not.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3530>
An end packet is only produced once for the last subtitle, so multiple
GAP events between subtitles would otherwise result in only a single end
packet and nothing else. This would potentially starve downstream,
so instead forward the GAP events in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3534>
This reverts commit fcad4cc646.
This was wrong in so many ways (a sketch of a correct check follows the list):
* The memcmp was badly used (it should use == 0 to check the data is identical,
and not != 0)
* There were no boundary checks on the present stream section_data when passing
it to memcmp.
* The return value should have been TRUE (i.e. we have done all checks, none of
them failed, therefore the section has been seen before)
* stream->section_data would *always* be NULL if the section had already been
processed
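For reference, a minimal sketch of what a correct "already seen" check would
look like (an illustrative helper, not the actual tsbase code):

  /* Compare only when a previous section is actually stored and the sizes
   * match; memcmp() == 0 means the data is identical, i.e. already seen. */
  static gboolean
  section_already_seen (const guint8 * stored, gsize stored_len,
      const guint8 * incoming, gsize incoming_len)
  {
    if (stored == NULL)
      return FALSE;             /* nothing stored yet, cannot be a repeat */
    if (stored_len != incoming_len)
      return FALSE;             /* different size, definitely a new section */
    return memcmp (stored, incoming, incoming_len) == 0;
  }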
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1559
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3421>
Add support for more formats so as to run the libvpx high bit depth test suite.
This covers the files under CONFIG_VP9_HIGHBITDEPTH.
This also allows running the yuv444p 8-bit file in the regular 8-bit VP9 suite.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3356>
The gap handling was in place, but there was no event handler to trigger it.
Implement the alpha sink event handler for the gaps. This fixes handling of
valid streams which may not refresh the alpha frames for every video frame.
It will also allow a clean error if the stream was missing the initial
alpha frame, at least until we find a better way to handle these
invalid frames.
Related to #1518
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3264>
When the output alignment is smaller than the input alignment, for
example when the output alignment is "FRAME" and the parser is likely
connected to a decoder, the current PTS setting for AV1 frames inside
a TU is not correct.
For example, a TU may begin with non-displayed frames and end with a
displayed frame. The current code assigns the PTS to the first
non-displayed frame, which is a decode-only frame, so that PTS is
discarded in the video decoder, while the last displayed frame has an
invalid PTS and the video decoder needs to guess it based on the frame
rate and the previous frame's PTS. This is neither a precise nor a
robust way. More importantly, when the previous frames provide a DTS,
the video decoder will also guess the PTS based on the previous frames'
DTS and trigger a warning like:
gstvideodecoder.c:3147:gst_video_decoder_prepare_finish_frame: \
<vavp9dec0> decreasing timestamp
This sets reordered_output and puts the decoder into free-running mode.
We should correct the PTS for a TU: let the non-displayed frames have
no PTS and set the correct PTS on the displayed one. Also, when the
AV1 stream has multiple spatial layers, there can be more than one
displayed frame inside one TU, all with the same PTS.
Note: if the input alignment is not TU aligned, we cannot know the
exact PTS of the TU, so we just clear the PTS of the decode-only
frames and leave the others unchanged.
We also correct all the PTS if the output is OBU aligned. In that case
all PTS and DTS are set to the input buffer's PTS.
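A minimal sketch of the intended assignment (illustrative only, not the actual
parser code; the helper and its arguments are hypothetical):

  /* Given the frames of one temporal unit, drop the PTS on decode-only
   * (non-displayed) frames and put the TU's PTS on the displayed frame(s). */
  static void
  assign_tu_pts (GstBuffer ** frames, const gboolean * shown, guint n_frames,
      GstClockTime tu_pts)
  {
    guint i;

    for (i = 0; i < n_frames; i++) {
      if (shown[i])
        GST_BUFFER_PTS (frames[i]) = tu_pts;              /* displayed frame(s) */
      else
        GST_BUFFER_PTS (frames[i]) = GST_CLOCK_TIME_NONE; /* decode-only */
    }
  }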
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>
When the incoming data has a bigger alignment than the output, we do not need to
call finish_frame() and exit the current handle_frame() for each split
frame. We can push them all in one shot within one handle_frame(), which
may improve performance and helps us find the edge of the TU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>
If there is an error while connecting, the streaming task will be stopped, and
is_running() will be false, causing a GST_FLOW_FLUSHING to be returned. Instead,
we perform the error check (!self->connection) first, to return an error if
that's what occurred.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3189>
When the alignment is "FRAME" and the parser is likely connected to
a decoder, the current PTS setting for VP9 frames inside a super
frame is not correct.
For example, the super frame may begin with non-displayed frames and
end with a displayed frame. The current code assigns the PTS to
the first non-displayed frame, which is a decode-only frame, so that
PTS is discarded in the video decoder, while the last displayed
frame has an invalid PTS and the video decoder needs to guess it
based on the frame rate and the previous frame's PTS. This is neither
a precise nor a robust way. More importantly, when the previous frames
provide a DTS, the video decoder will also guess the PTS based on the
previous frames' DTS and trigger a warning like:
gstvideodecoder.c:3147:gst_video_decoder_prepare_finish_frame: \
<vavp9dec0> decreasing timestamp
This sets reordered_output and puts the decoder into free-running mode.
We should correct the PTS for a super frame: let the non-displayed
frames have no PTS and set the correct PTS on the displayed one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3155>
Since commit a79a756b79 we can switch to ignore-pcr automatically 500ms
into a live stream when no PCR has been seen by then. However the stream counting in
program change detection was wrongly considering ignore-pcr programs to have a
separate PCR PID, even though we are actually ignoring the PCR PID completely,
resulting in an erroneous program switch getting triggered from the different
stream count. This in turn would send an EOS and switch out the pads for what
actually is still the same program, while we intended to simply apply a
workaround for broken encoders.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3060>
With the 2.72 release, glib-networking developers have decided that
TLS certificate validation cannot be implemented correctly by them, so
they've deprecated it.
In a nutshell: a cert can have several validation errors, but there
are no guarantees that the TLS backend will return all those errors,
and things are made even more complicated by the fact that the list of
errors might refer to certs that are added for backwards-compat and
won't actually be used by the TLS library.
Our best option is to ignore the deprecation and pass the warning on to
users so they can make an appropriate security decision regarding
this.
We can't deprecate the tls-validation-flags property because it is
very useful when connecting to RTSP cameras that will never get
updates to fix certificate errors.
Relevant upstream merge requests / issues:
https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2214
https://gitlab.gnome.org/GNOME/glib-networking/-/issues/179
https://gitlab.gnome.org/GNOME/glib-networking/-/merge_requests/193
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2494>
There might be a sequence of event and buffer flow:
- Got stream-start/caps/segment events
- Got flush events
- And then buffers with a new segment event
In the above case, the stream-start and caps events might not reach the
peer proxysrc if the peer proxysrc is not ready to receive them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1552>
Commit b90d0274 introduced uninitialized width and height when we
consider changing the "pixel-aspect-ratio" for some interlaced streams.
We need to check the resolution in the src caps, and if no resolution
info is found, there is no need to consider the aspect ratio.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2630>
Some encoders (e.g. Makito) have H265 field-based interlacing, but then
also specify a 1:2 pixel aspect ratio. That makes it kind of work with
decoders that don't properly support field-based decoding, but makes us
end up with the wrong aspect ratio if we implement everything properly.
As a workaround, detect 1:2 pixel aspect ratio for field-based
interlacing, and check if making that 1:1 would make the new display
aspect ratio common. In that case, we override it with 1:1.
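Roughly the idea, as a sketch (not the exact h265parse code; the set of
"common" ratios checked here is illustrative):

  /* Does width/height * par yield a well-known display aspect ratio? */
  static gboolean
  dar_is_common (gint width, gint height, gint par_n, gint par_d)
  {
    gint dar_n, dar_d;

    if (!gst_util_fraction_multiply (width, height, par_n, par_d,
            &dar_n, &dar_d))
      return FALSE;
    return (dar_n == 16 && dar_d == 9) || (dar_n == 4 && dar_d == 3);
  }

  /* If a field-based interlaced stream signals a 1:2 PAR but 1:1 would give
   * a common DAR, assume the 1:2 only compensated for field-based decoding. */
  static void
  maybe_fix_field_based_par (gint width, gint height, gboolean field_based,
      gint * par_n, gint * par_d)
  {
    if (field_based && *par_n == 1 && *par_d == 2 &&
        dar_is_common (width, height, 1, 1))
      *par_n = *par_d = 1;      /* override with 1:1 */
  }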
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2577>
Adding a URI interface enables plugging in RFB/VNC sources to anything
that makes use of uridecodebin:
gst-play-1.0 rfb://:password@10.40.216.180:5903?shared=1
Use the userinfo to pass the user (ignored) and password; other key/value pairs
can be encoded in the query part of the URI (see shared).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1963>
Now it uses the JPEG parser in libgstcodecparsers, while the whole
code is simplified by relying more on the baseparse class for tag
handling.
The element now signals chroma-format, and the default framerate is 0/1,
which is used for still images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1473>
Until March 2022, the FFmpeg MXF muxer would write the various index table
segments with the same instance ID, which should only be used if it is a
duplicate/repeated table.
In order to cope with those, we first compare the other index table segment
properties (body/index SID, start position) before comparing the instance
ID. This will ensure that we don't consider them duplicates, but can still
detect "real" duplicates (which would have the same other properties).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2407>
mxfmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mpegtsmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
Some streams have 2 PMT sections in a single TS packet. The first one is "valid"
but doesn't contain/define any streams. That causes an unrecoverable issue when
we try to activate the 2nd (valid) PMT.
Instead of doing that, pre-emptively refuse to process a PMT without any streams
present in it. We still post that section on the bus to inform applications.
Fixes #1181
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2310>
regardless of whether they are input as individual buffers or
buffer lists.
The ONVIF specification requires all packets to hold the extension;
it makes no sense to behave differently when handling buffer lists.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
According to the spec:
color range equal to 0 shall be referred to as the studio swing
representation and color range equal to 1 shall be referred to as
the full swing representation.
The current code does just the opposite.
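i.e. the intended mapping is (a sketch using GStreamer's video color range
values, assuming color_range is the parsed bitstream flag):

  GstVideoColorRange range;

  range = color_range ? GST_VIDEO_COLOR_RANGE_0_255    /* 1: full swing   */
                      : GST_VIDEO_COLOR_RANGE_16_235;  /* 0: studio swing */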
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2288>
Found via a Clang analyzer build. Specifically we had:
gstav1parse.c[1850,11] in gst_av1_parse_detect_stream_format: Logic error: The left operand of '==' is a garbage value
gstav1parse.c[1606,11] in gst_av1_parse_handle_to_small_and_equal_align: Logic error: The left operand of '==' is a garbage value
Also a couple of false-positives:
gstav1parse.c[1398,24] in gst_av1_parse_handle_one_obu: Logic error: Branch condition evaluates to a garbage value
gstav1parse.c[1440,37] in gst_av1_parse_handle_one_obu: Logic error: The left operand of '-' is a garbage value
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2230>
GLib guarantees libintl is always present, using proxy-libintl as a
last resort. There is no need to mock the gettext API any more.
This fixes the static build on Windows because G_INTL_STATIC_COMPILATION must
be defined before including libintl.h, and GLib does that for us as part
of including glib.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2028>
This reverts commit 652773de36 and
modifies it to rename the caps field name to coded-picture-structure.
It was previously removed because it confuses the decoder and we didn't
have a valid use case for including it in the encoded caps at this
stage. We now do have such a use case but still don't want to confuse
the decoder, so the field is renamed.
However, it is still not accurate without looking at the SEI picture
structure of each frame, so it was named coded-picture-structure. If its
value is "frame" it is most likely progressive, if it's "field" it is
most likely interlaced or mixed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2177>
In case of re-syncing (i.e. moving to another partition to avoid too much of an
interleave), there were previously no checks to figure out whether a given
partition was already fully handled (i.e. when coming across it again after a
previous resync).
In order to handle this at least for single-track partitions, check whether we
have reached the essence track duration, and if so skip the partition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
The essence track position should only be overridden if we successfully switched to
another position. In case of EOS we do not want to override it, else we would
increase the track position *again* at the end of this function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
They are part of gst_dep already and we have to make sure to always have
gst_dep. The order in dependencies matters, because it is also the order
in which Meson will set -I args. We want gstreamer's config.h to take
precedence over glib's private config.h when it's a subproject.
While at it, remove useless fallback args for the gmodule/gio dependencies;
only GStreamer core needs them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2031>
Make it possible to configure the element to obtain the timestamps from
the reference timestamp meta instead of using the ntp-offset property,
or estimating its own offset. Currently the only time format supported
is "timestamp/x-unix", i.e. UTC time expressed in the Unix epoch.
In addition the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence we had to modify a couple of files in
the rtsp-server as well.
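For reference, reading such a timestamp from the meta looks roughly like this
(a sketch, not the actual element code):

  /* Pull a UTC timestamp (Unix epoch) from the buffer's reference
   * timestamp meta, if one with the expected caps is attached. */
  static GstClockTime
  get_unix_reference_timestamp (GstBuffer * buffer)
  {
    GstCaps *ref = gst_caps_new_empty_simple ("timestamp/x-unix");
    GstReferenceTimestampMeta *meta =
        gst_buffer_get_reference_timestamp_meta (buffer, ref);
    GstClockTime ts = meta ? meta->timestamp : GST_CLOCK_TIME_NONE;

    gst_caps_unref (ref);
    return ts;
  }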
Fixes #984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
There could be a case where the new program has the same program number as the
previous one ... but is actually located on a PID previously used for an elementary
stream. In that case the program is guaranteed not to be an update of the
previous program but a completely new one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
We need to be able to look up programs by their PID as well. Using a hash table
was a bit sub-par (and overkill) for storing a range of programs.
This is needed because there could potentially be two programs with the same
program id but different PMT PID (while one is being deactivated the new one
would "exist").
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
And use the output segment position for the outgoing timestamp while it
is. This is needed to delay the calculation of `output_ts_offset` until
we actually have a usable timestamp, as tsmux will output a few initial
packets while `last_ts` is still unset.
Without this, the calculation would use the initial `0` value, which did
not have the intended effect of making VBR mode behave like CBR mode,
but always calculated an offset equal to the selected start time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1884>
This was to support very old V4L2 kernels. As we moved to DMABuf and can now
detach buffers on renegotiation, the buffer it tries to fix no longer exists.
The risk of blocking the application indefinitely does still exist though.
Fixes #1070
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1861>
When we negotiate with downstream, we should use the intersected
caps of the input and output to decide the alignment and stream format.
The current code just uses the input caps, which may lack the stream
format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
The demux now outputs the AV1 stream in "tu" alignment, so we do not need
to detect the input alignment. But the annex b stream format is not recognized
by the demux, so we still need to detect that stream format for the first input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
Decoders that required frame alignment and didn't have an associated
alpha decoder were skipped. This is because the parser was constructing
caps based on the software alpha decoder, which specifies super-frame
alignment.
Iterate over the caps to filter the ones that have a matching codec-alpha, with
the semantics that no codec-alpha field means codec-alpha=false. Then, if
everything was removed, fall back to the original caps, so that the first non-alpha
decoder will be picked.
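The filtering amounts to something like this (a sketch of the idea, not the
exact code):

  /* Keep only the caps structures whose codec-alpha matches the stream;
   * an absent codec-alpha field is treated as codec-alpha=false. */
  static GstCaps *
  filter_codec_alpha (GstCaps * caps, gboolean alpha)
  {
    GstCaps *res = gst_caps_new_empty ();
    guint i, n = gst_caps_get_size (caps);

    for (i = 0; i < n; i++) {
      GstStructure *s = gst_caps_get_structure (caps, i);
      gboolean val = FALSE;

      gst_structure_get_boolean (s, "codec-alpha", &val);
      if (val == alpha)
        gst_caps_append_structure (res, gst_structure_copy (s));
    }

    if (gst_caps_is_empty (res)) {
      /* everything was removed: fall back to the original caps */
      gst_caps_unref (res);
      return gst_caps_ref (caps);
    }
    return res;
  }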
Fixes #820
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1855>
If present, add '-lsocket' and '-lnsl' to network_deps.
ext/curl/meson.build: add network_deps to dependencies
gst/festival/meson.build: same
sys/shm/meson.build: same
Fixes linking issues on Illumos distros.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1525>
The current manner of deciding the new temporal unit is based on the
temporal delimiter (TD) OBU. We only start a new temporal unit when
a TD comes.
But some streams do not have TDs at all, which makes the output "TU"
alignment fail to work. We now add a check based on the relationship
between the different layers, which can successfully detect the TU edge.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Some streams may have problematic OBUs at the beginning, which causes
the parser to fail to detect the alignment and return an error. For example,
there may be verbose OBUs before a valid sequence, which should be
discarded until we meet a valid sequence. We should let the parser
continue when we meet such cases, rather than just return an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
If the VANC track does contain packets, but we skip over all packets, just
treat it the same as if there hadn't been any packets at all and send a
GAP event instead of erroring out with "Failed to handle essence element".
We would error out because when we reach the end of the loop without having
found a closed caption packet the flow return variable is still FLOW_ERROR
which is what it has been initialised to.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1518>
When using playbin3, it seems that the alpha decode is always first to
push caps and run an allocation query. As the format changes from the
sink and alpha pads were not synchronized, the allocation query could
end up being run before the caps are pushed. That may lead to a failing
query, which makes the decoder think there is no GstVideoMeta downstream
and most likely do a CPU copy of the frame.
This patch implements a format cookie to track and synchronize the
format changes on both pads, fixing the racy performance issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
This adds the alignment field to the template caps. Without this field
set, the auto-plugger will see fixed caps and will use
gst_caps_is_subset() against the caps produced by the parser. This is a
challenge for all cases where a parser can do conversion. This is fixed
by adding the alignment field, which makes the auto-pluggers do an
intersection of the caps, since they now get non-fixed caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
With mpeg4videoparse drop=false config-interval=N|-1 we might be
trying to insert a config before we have actually received one,
in which case we'll try to map a NULL buffer, which will generate
lots of criticals.
Fixes #855
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1265>
Trying to reset before the pads have been deactivated races with the
streaming thread. There was also a buggy buffer clear leaving a dangling
`stored_frame` pointer around. Use `gst_interlace_reset` so this happens
properly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1039>
There are streams in the wild that add a SCTE-35 trigger in another
stream, e.g. a GA94 stream. Most encoders would replace the GA94
descriptor ID with the CUEI one temporarily, but there are some that
will add two registration ID descriptors, one with GA94 and one with
CUEI.
Failing to parse the CUEI registration ID in that case would return
FALSE in _stream_is_private_section, therefore setting it as a known PES
stream and pushing packets downstream instead of calling handle_psi.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/979>
We should also take into account whether data is currently pending when checking
for gaps on streams. It could very well be that some streams have very low
bitrate (and spread out) data. For those we don't want to push out a gap event.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1179>
This is only enabled in push time mode. Furthermore it's only enabled for now if
PCR is to be ignored.
The problem is dealing with streams where the initial PTS/DTS observation might
be greater than following ones (from another PID for example). Before this patch,
this would result in sending buffers without any timestamp, which would cause a
wide variety of issues.
Instead, pad segment and buffer timestamps with an extra
value (packetizer->extra_shift, defaulting to 2s), to ensure that we can get valid
timestamps on outgoing buffers (even if that means they are before the segment
start).
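Schematically (a sketch; variable names other than extra_shift are illustrative):

  /* out_ts would be negative (hence invalid) whenever pts < first_pts;
   * padding with extra_shift keeps it valid for observations up to 2s
   * earlier, at the cost of landing before the (equally shifted) segment
   * start. */
  out_ts = segment_base + (pts - first_pts) + packetizer->extra_shift;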
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1179>
When using the following setup (the error can be reproduced using
simpler sender pipelines), the receiver resynchronises the clock on RTCP
packets. The effect was that a couple of seconds were cut out of the
playback because an initial RTCP packet was dropped.
When sending out all RTCP packets (setting sync=FALSE on the RTCP
udpsink), the playback is fine.
This syncs rtpsink with rtpsrc (where this property was already set).
gst-launch-1.0 filesrc location=899-en.mp3 \
! mpegaudioparse \
! mpg123audiodec \
! audioconvert \
! audioresample \
! avenc_g722 \
! rtpg722pay \
! rtpsink uri=rtp://239.1.2.3:1234
gst-launch-1.0 uridecodebin uri=rtp://239.1.2.3:1234?encoding-name=G722 \
! autoaudiosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/993>
When there are elements between the demuxer and the muxer that
introduce an offset to the running time, or when offsets are
set on pads by the application, this shift must be taken into
account when calculating the final pts_adjustment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>
mpegtsmux can receive SCTE sections from two origins: events
created by the application, and events forwarded downstream by
mpegtsdemux, containing sections that may not have been fully
parsed, and additional data to help tsmux translate times to
the correct domain, both for requesting keyframes and calculating
an accurate pts_adjustment.
The complete approach is documented further in a comment above
the relevant function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>
Instead of modifying the splice times in the incoming sections
to running time and expecting e.g. mpegtsmux to convert those back
to its local PES time domain, which might be impossible when
those splice times are encrypted or the specification is extended,
transmit the needed information to the muxer as separate fields in
the event:
* A pts offset field can be used by the muxer in order to calculate
a final pts_adjustment
* A rtime_map can be used by the muxer to determine the correct
running times at which it should request keyframes
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>
Makes it possible to support passing SCTE 35 cue points from
demuxer to muxer, while preserving correct timing.
This will also improve ex nihilo cue point injection, as splice
times and durations are now interpreted as running time values,
and may trigger key unit requests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>