This reverts commit 652773de36 and
modifies it to rename the caps field name to coded-picture-structure.
It was previously removed because it confuses the decoder and we didn't
have a valid use case for including it in the encoded caps at this
stage. We now do have such a use case but still don't want to confuse
the decoder, so the field is renamed.
However, it is still not accurate without looking at the SEI picture
structure of each frame, so it was named coded-picture-structure. If its
value is "frame" it is most likely progressive, if it's "field" it is
most likely interlaced or mixed.
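For illustration, a downstream element could check the new field roughly like
this (a sketch only; how to react to the value is up to the element):

    /* Sketch: "field" suggests interlaced or mixed content, "frame" (or no
     * field at all) suggests progressive content. */
    static gboolean
    caps_is_probably_interlaced (const GstCaps * caps)
    {
      const GstStructure *s = gst_caps_get_structure (caps, 0);
      const gchar *cps = gst_structure_get_string (s, "coded-picture-structure");

      return cps != NULL && g_strcmp0 (cps, "field") == 0;
    }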
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2177>
In case of re-syncing (i.e. moving to another partition to avoid too much of an
interleave), there were previously no checks to figure out whether a given
partition was already fully handled (i.e. when coming across it again after a
previous resync).
In order to handle this at least for single-track partitions, check whether we
have reached the essence track duration, and if so skip the partition.
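A minimal sketch of that check for the single-track case (the field names are
illustrative, not the actual mxfdemux structures):

    /* Sketch: when coming across a partition again after a resync, skip it
     * if its single essence track has already been fully consumed. */
    if (partition->single_track != NULL &&
        partition->single_track->position >= partition->single_track->duration) {
      GST_DEBUG ("partition already fully handled, skipping");
      /* jump to the next partition instead of parsing this one again */
    }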
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
The essence track position should only be overridden if we successfully switched to
another position. In case of EOS we do not want to override it, otherwise we would
increase the track position *again* at the end of this function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2150>
They are part of gst_dep already and we have to make sure to always have
gst_dep. The order in dependencies matters, because it is also the order
in which Meson will set -I args. We want gstreamer's config.h to take
precedence over glib's private config.h when it's a subproject.
While at it, remove useless fallback args for the gmodule/gio dependencies;
only gstreamer core needs them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2031>
Make it possible to configure the element to obtain the timestamps from
reference timestamp metadata instead of using the ntp-offset property,
or estimating its own offset. Currently the only time format supported
is "timestamp/x-unix", i.e. UTC time expressed in the unix time epoch.
In addition the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence we had to modify a couple of files in
the rtsp-server as well.
Fixes #984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
There could be a case where the new program has the same program number as the
previous one ... but is actually located on a PID previously used for an elementary
stream. In that case the program is guaranteed not to be an update of the
previous program but a completely new one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
We need to be able to look up programs by their PID as well. Using a hash table
was a bit sub-par (and overkill) for storing a range of programs.
This is needed because there could potentially be two programs with the same
program id but different PMT PID (while one is being deactivated the new one
would "exist").
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
And use the output segment position for the outgoing timestamp while it
is still unset. This is needed to delay the calculation of `output_ts_offset` until
we actually have a usable timestamp, as tsmux will output a few initial
packets while `last_ts` is still unset.
Without this, the calculation would use the initial `0` value, which did
not have the intended effect of making VBR mode behave like CBR mode,
but always calculated an offset equal to the selected start time.
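In pseudo-C the intended behaviour looks roughly like this (names follow the
description above, not necessarily the exact tsmux code):

    if (!GST_CLOCK_TIME_IS_VALID (mux->last_ts)) {
      /* no usable input timestamp yet: stamp the initial packets with the
       * output segment position instead of deriving an offset from 0 */
      ts = mux->segment.position;
    } else {
      if (!GST_CLOCK_TIME_IS_VALID (mux->output_ts_offset))
        mux->output_ts_offset = start_time - mux->last_ts;  /* computed once */
      ts = mux->last_ts + mux->output_ts_offset;
    }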
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1884>
This was to support very old V4L2 kernels. As we moved to DMABuf and can now
detach buffers on renegotiation, the buffer issue it tried to fix no longer exists.
The risk of blocking the application indefinitely does still exist though.
Fixes #1070
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1861>
When we negotiate with downstream, we should use the intersected
caps of input and output to decide the alignment and stream format.
The current code just uses the input caps which may lack the stream
format.
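Roughly (a sketch, not the exact element code):

    /* Sketch: decide alignment/stream-format from the intersection of the
     * input caps and the caps accepted downstream. */
    GstCaps *isect = gst_caps_intersect (input_caps, downstream_caps);

    if (!gst_caps_is_empty (isect)) {
      GstStructure *s = gst_caps_get_structure (isect, 0);
      const gchar *alignment = gst_structure_get_string (s, "alignment");
      const gchar *stream_format = gst_structure_get_string (s, "stream-format");
      /* use these (still possibly NULL if unfixed) to pick the output format */
    }
    gst_caps_unref (isect);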
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
The demux now outputs the AV1 stream in "tu" alignment, so we do not need
to detect the input alignment. But since the annex b stream format is not recognized
by the demux, we still need to detect that stream format for the first input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
Decoders that required frame alignment and didn't have an associated
alpha decoder were skipped. This is because the parser was constructing
caps based on the software alpha decoder, which specifies super-frame
alignment.
Iterate over the caps to filter the ones that have a matching codec-alpha field,
with the semantic that a missing codec-alpha field means codec-alpha=false. Then,
if everything was removed, fall back to the original caps so that the first
non-alpha decoder will be picked.
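A sketch of that filtering logic (the helper name and exact call sites are
illustrative):

    static GstCaps *
    filter_by_codec_alpha (GstCaps * caps, gboolean want_alpha)
    {
      GstCaps *filtered = gst_caps_new_empty ();
      guint i;

      for (i = 0; i < gst_caps_get_size (caps); i++) {
        const GstStructure *s = gst_caps_get_structure (caps, i);
        gboolean alpha = FALSE;   /* no codec-alpha field means FALSE */

        gst_structure_get_boolean (s, "codec-alpha", &alpha);
        if (alpha == want_alpha)
          gst_caps_append_structure (filtered, gst_structure_copy (s));
      }

      if (gst_caps_is_empty (filtered)) {
        /* everything was removed: fall back to the original caps so the
         * first non-alpha decoder gets picked */
        gst_caps_unref (filtered);
        return gst_caps_ref (caps);
      }
      return filtered;
    }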
Fixes #820
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1855>
If present, add '-lsocket' and '-lnsl' to network_deps.
ext/curl/meson.build: add network_deps to dependencies
gst/festival/meson.build: same
sys/shm/meson.build: same
Fixes linking issues on Illumos distros.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1525>
The current manner of deciding the new temporal unit is based on the
temporal delimiter (TD) OBU. We only start a new temporal unit when
a TD comes.
But some streams do not have a TD at all, which makes the output "TU"
alignment fail to work. We now add a check based on the relationship
between the different layers, which can successfully detect the TU edge.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Some streams may have problematic OBUs at the beginning, which cause
the parser to fail to detect the alignment and return an error. For example,
there may be verbose OBUs before a valid sequence, which should be
discarded until we meet a valid sequence. We should let the parser
continue in such cases, rather than just return an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
If the VANC track does contain packets, but we skip over all packets, just
treat it the same as if there hadn't been any packets at all and send a
GAP event instead of erroring out with "Failed to handle essence element".
We would error out because, when we reach the end of the loop without having
found a closed caption packet, the flow return variable is still FLOW_ERROR,
which is what it was initialised to.
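In other words, something along these lines (a sketch; variable names are
illustrative):

    if (!found_cc_packet) {
      /* no usable packet in this VANC element: behave as if the track were
       * empty and signal a gap instead of keeping the FLOW_ERROR initialiser */
      gst_pad_push_event (srcpad, gst_event_new_gap (timestamp, duration));
      ret = GST_FLOW_OK;
    }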
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1518>
When using playbin3, it seems that the alpha decode is always first to
push caps and run an allocation query. As the format changes from sink
and alpha were not synchronized, the allocation query could end up
being run before the caps are pushed. That may lead to a failing query,
which makes the decoder think there is no GstVideoMeta downstream and
most likely do a CPU copy of the frame.
This patch implements a format cookie to track and synchronize the
format changes on both pads, fixing the racy performance issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
This adds the alignment field to the template caps. Without this field
set, the auto-plugger will see fixed caps and will use
gst_caps_is_subset() against the caps produced by the parser. This is a
challenge for all cases where a parser can do conversion. This is fixed
by adding the alignment field, which makes the auto-plugger do an
intersection of the caps, as it now gets unfixed caps.
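An illustrative example of the difference (media type and alignment values are
just for demonstration):

    GstCaps *parser_out = gst_caps_from_string
        ("video/x-vp9, alignment=(string)super-frame");
    GstCaps *decoder_sink = gst_caps_from_string
        ("video/x-vp9, alignment=(string)frame");
    GstCaps *tmpl = gst_caps_from_string
        ("video/x-vp9, alignment=(string){ super-frame, frame }");

    /* a strict subset check rejects the pairing even though the parser
     * could convert the alignment ... */
    g_assert (!gst_caps_is_subset (parser_out, decoder_sink));
    /* ... while intersecting the unfixed template caps succeeds */
    g_assert (gst_caps_can_intersect (tmpl, decoder_sink));

    gst_caps_unref (parser_out);
    gst_caps_unref (decoder_sink);
    gst_caps_unref (tmpl);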
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
With mpeg4videoparse drop=false config-interval=N|-1 we might be
trying to insert a config before we have actually received one,
in which case we'll try to map a NULL buffer which will generate
lots of criticals.
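A guard along these lines avoids that (a sketch; `parse->config` stands for the
stored config buffer):

    if (parse->config != NULL) {
      GstMapInfo map;

      if (gst_buffer_map (parse->config, &map, GST_MAP_READ)) {
        /* safe to prepend the config data here */
        gst_buffer_unmap (parse->config, &map);
      }
    } else {
      /* no config received yet: nothing to insert */
    }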
Fixes #855
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1265>
Trying to reset before the pads have been deactivated races with the
streaming thread. There was also a buggy buffer clear leaving a dangling
`stored_frame` pointer around. Use `gst_interlace_reset` so this happens
properly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1039>
There are streams in the wild that have to add a SCTE-35 trigger to
another stream, e.g. a GA94 one. Most encoders would replace the GA94
descriptor ID with the CUEI one temporarily, but there are some that
will add two registration ID descriptors, one with GA94 and one with
CUEI.
Failing to parse the CUEI registration ID in that case would return
FALSE in _stream_is_private_section, therefore setting the stream as a known PES
and pushing packets downstream instead of calling handle_psi.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/979>
We should also take into account whether data is currently pending when checking
for gaps on streams. It could very well be that some streams have very low
bitrate (and spread out) data. For those we don't want to push out a gap event.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1179>
This is only enabled in push time mode. Furthermore it's only enabled for now if
PCR is to be ignored.
The problem is dealing with streams where the initial PTS/DTS observation might
be greater than following ones (from another PID for example). Before this patch,
this would result in sending buffers without any timestamp which would cause a
wide variety of issues.
Instead, pad segment and buffer timestamps with an extra
value (packetizer->extra_shift, defaulting to 2s), to ensure that we can get valid
timestamps on outgoing buffers (even if that means they are before the segment
start).
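In pseudo-code the padding amounts to (names follow the description above):

    /* Sketch: shift the segment and every outgoing buffer timestamp by the
     * same extra amount, so a PTS/DTS up to extra_shift earlier than the
     * first observation still yields a valid timestamp (just before the
     * segment start). */
    segment.start += packetizer->extra_shift;   /* defaults to 2s */
    buffer_pts    += packetizer->extra_shift;
    buffer_dts    += packetizer->extra_shift;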
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1179>
When using the following setup (the error can be reproduced using
simpler sender pipelines), the receiver resynchronises the clock on RTCP
packets. The effect was that a couple of seconds were cut out of the
playback because an initial RTCP packet was dropped.
When sending out all RTCP packets (setting sync=FALSE on the RTCP
udpsink), the playback is fine.
This syncs rtpsink with rtpsrc (where this property was already set).
gst-launch-1.0 filesrc location=899-en.mp3 \
! mpegaudioparse \
! mpg123audiodec \
! audioconvert \
! audioresample \
! avenc_g722 \
! rtpg722pay \
! rtpsink uri=rtp://239.1.2.3:1234
gst-launch-1.0 uridecodebin uri=rtp://239.1.2.3:1234?encoding-name=G722 \
! autoaudiosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/993>
When there are elements between the demuxer and the muxer that
introduce an offset to the running time, or when offsets are
set on pads by the application, this shift must be taken into
account when calculating the final pts_adjustment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>
mpegtsmux can receive SCTE sections from two origins: events
created by the application, and events forwarded downstream by
mpegtsdemux. The latter contain sections that may not have been fully
parsed, as well as additional data to help tsmux translate times to
the correct domain, both for requesting keyframes and for calculating
an accurate pts_adjustment.
The complete approach is documented further in a comment above
the relevant function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>
Instead of modifying the splice times in the incoming sections
to running time and expecting e.g. mpegtsmux to convert those back
to its local PES time domain, which might be impossible when
those splice times are encrypted or the specification is extended,
transmit the needed information to the muxer as separate fields in
the event:
* A pts offset field can be used by the muxer in order to calculate
a final pts_adjustment
* A rtime_map can be used by the muxer to determine the correct
running times at which it should request keyframes
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>
Makes it possible to support passing SCTE 35 cue points from
demuxer to muxer, while preserving correct timing.
This will also improve ex nihilo cue point injection, as splice
times and durations are now interpreted as running time values,
and may trigger key unit requests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/913>