The convert-error signal is emitted whenever we get a GstFlowReturn
other than GST_FLOW_OK. The handler can then decide what to convert it
into; for instance, returning the same GstFlowReturn leaves it unconverted.
The default handler will act according to the ignore-error,
ignore-notlinked, ignore-notnegotiated and convert-to properties. If a
handler is connected, these properties are ignored.
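A minimal sketch of connecting such a handler, assuming this is the
errorignore element and that the handler receives the offending
GstFlowReturn and returns the value to convert it into:
```
#include <gst/gst.h>

/* Assumed signature: return the GstFlowReturn the element should use. */
static GstFlowReturn
on_convert_error (GstElement * element, GstFlowReturn ret, gpointer user_data)
{
  if (ret == GST_FLOW_NOT_LINKED)
    return GST_FLOW_OK;         /* convert NOT_LINKED into OK */
  return ret;                   /* same value returned: no conversion */
}

static void
setup_handler (GstElement * errorignore)
{
  g_signal_connect (errorignore, "convert-error",
      G_CALLBACK (on_convert_error), NULL);
}
```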
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstbin.h:27,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:35,
from ../gst/rtp/gstrtpsink.h:23,
from ../gst/rtp/gstrtpsink.c:49:
In function ‘gst_rtp_sink_start’,
inlined from ‘gst_rtp_sink_change_state’ at ../gst/rtp/gstrtpsink.c:509:11:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstelement.h:422:18: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
422 | gchar *__txt = _gst_element_error_printf text; \
../gst/rtp/gstrtpsink.c:476:3: note: in expansion of macro ‘GST_ELEMENT_ERROR’
476 | GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND,
| ^~~~~~~~~~~~~~~~~
../gst/rtp/gstrtpsink.c: In function ‘gst_rtp_sink_change_state’:
../gst/rtp/gstrtpsink.c:477:37: note: format string is defined here
477 | ("Could not resolve hostname '%s'", remote_addr),
| ^~
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstbin.h:27,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:35,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/rtp/gstrtpdefs.h:27,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/rtp/rtp.h:25,
from ../gst/rist/gstristsink.c:72:
In function ‘gst_rist_sink_setup_rtcp_socket’,
inlined from ‘gst_rist_sink_start’ at ../gst/rist/gstristsink.c:658:10,
inlined from ‘gst_rist_sink_change_state’ at ../gst/rist/gstristsink.c:801:13:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstelement.h:422:18: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
422 | gchar *__txt = _gst_element_error_printf text; \
../gst/rist/gstristsink.c:595:3: note: in expansion of macro ‘GST_ELEMENT_ERROR’
595 | GST_ELEMENT_ERROR (sink, RESOURCE, NOT_FOUND,
| ^~~~~~~~~~~~~~~~~
../gst/rist/gstristsink.c: In function ‘gst_rist_sink_change_state’:
../gst/rist/gstristsink.c:596:37: note: format string is defined here
596 | ("Could not resolve hostname '%s'", remote_addr),
| ^~
Allows for "low latency" mpeg-ts mode which is not standard, but somewhat common.
For this to work the sender has to put timestamps at a higher frequency than the spec requires.
PES packets with size 0 are unbounded, and
could therefore overflow the 32-bit size
accumulator.
Add a 32MB limit, which is larger than
any PES packet should ever get. If one does,
then output a 32MB chunk and continue.
A guint32 greater than 2^31 would be interpreted as negative by
gst_util_uint64_scale_int() and trigger a critical warning. Use the
64-bit integer version of the function instead.
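A sketch of the change, with illustrative names (the actual call sites
in the element differ):
```
#include <gst/gst.h>

/* 'size' stands in for the guint32 that can exceed 2^31. */
static GstClockTime
bytes_to_duration (guint32 size, gint byte_rate)
{
  /* Before: gst_util_uint64_scale_int (GST_SECOND, size, byte_rate);
   * a size above 2^31 is truncated to a negative gint and triggers a
   * g_critical. The 64-bit variant accepts the full guint32 range. */
  return gst_util_uint64_scale (GST_SECOND, size, byte_rate);
}
```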
Don't signal a pipeline error when processing incomplete
jp2k PES packets that are too small. That can happen normally
during a DISCONT and shouldn't shut down the whole pipeline.
Remove some custom and incomplete seek calculation
logic in favour of gst_segment_do_seek(), and
short-circuit any actual seeking or recalculation
if the position didn't change and just send an updated
segment directly.
This removes the custom seeking logic in favour of
using standard core seek handling.
The default behaviour of rtponviftimestamp is to drop buffers
outside the segment. This creates obvious problems for reverse
playback.
The ONVIF specification unfortunately doesn't describe how to handle
that specific use case, but we can expose a property to let the
user disable the dropping behaviour, and forward these buffers with
a G_MAXUINT64 ONVIF timestamp.
Also modify rtponvifparse to handle such timestamps appropriately.
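A sketch of opting out of the dropping behaviour, assuming the new
property is named "drop-out-of-segment":
```
#include <gst/gst.h>

static void
keep_out_of_segment_buffers (GstElement * onvifts)
{
  /* Assumed property name: when disabled, out-of-segment buffers are
   * forwarded with a G_MAXUINT64 ONVIF timestamp instead of dropped. */
  g_object_set (onvifts, "drop-out-of-segment", FALSE, NULL);
}
```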
We reject caps with other framerates as it's impossible to generate
timecodes unless we actually know a constant framerate. Reflect this
also in the pad template caps.
Instead of using a static hardcoded PCR interval, allow the user
to configure it.
Also revert the default back to a 40 ms interval; it was changed
in recent patches for no good reason.
There's no point in working with invalid LTC timestamps as all future
calculations will be wrong based on this, and invalid LTC timestamps can
sometimes be read via the audio input.
This patch just enforces boundaries on access to the
standard_deviation array (64 floats). Such a case can be
seen with a corrupted stream, where there's no hope of
obtaining a valid decoded frame anyway.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1002
The D bit is meant to be set whenever there is a discontinuity
in transmission, and directly maps to the DISCONT flag.
The E bit is not meant to be set on every buffer preceding a
discontinuity, but only on the last buffer of a contiguous section
of recording. This has to be signaled through the unfortunately-named
"discont" field of the custom NtpOffset event.
Based on a patch by
Georg Lippitsch <glippitsch@toolsonair.com>
Vivia Nikolaidou <vivia@toolsonair.com>
Using libltc from https://github.com/x42/libltc
We now have a single property to select the timecode source that should
be applied, and for each timecode source the timecode is updated at
every frame. Then based on a set mode, the timecode is added to the
frame if none exists already or all existing timecodes are removed and
the timecode is added.
In addition, the real-time clock is considered a proper timecode source
now, instead of only allowing initialization with it once at the
beginning; and instead of just taking the current time, we now take the
current time at the clock time of the video frame.
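A sketch of the resulting configuration, assuming the properties are
named "source" and "set" with the enum nicks shown:
```
#include <gst/gst.h>

static void
configure_stamper (GstElement * stamper)
{
  /* Assumed nick: take the timecode from the real-time clock... */
  gst_util_set_object_arg (G_OBJECT (stamper), "source", "rtc");
  /* ...and replace any timecodes that already exist on the frame. */
  gst_util_set_object_arg (G_OBJECT (stamper), "set", "always");
}
```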
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the
important gst_h264_parser_parse_nal() function, meaning the
cached SPS were always silently invalid. That needs changing
anyway, and after that change no one ever passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param.
Fix a recently introduced segfault. Don't de-reference a NULL
SPS pointer when attempting to update source caps before SPS
has been seen in the stream.
Interleaved frames can be fragmented across incoming buffers.
Thus, a single input buffer can contain multiple frames, as
well as an incomplete frame. The parser now preserves its
parsing state and handles both situations.
Fixes #991
The length of the TCP payload is the IP plus TCP header length
subtracted from the IP datagram length specified in the IP header.
Prior to this, the size was calculated incorrectly, considering
all data after the TCP header, until the end of the packet, as payload.
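A sketch of the corrected calculation; the field names are illustrative,
not the actual pcapparse variables:
```
#include <glib.h>

static guint
tcp_payload_len (guint16 ip_total_len, guint ip_header_len,
    guint tcp_header_len)
{
  /* Payload = IP datagram length minus the IP and TCP header lengths,
   * not "everything after the TCP header until the end of the frame". */
  return ip_total_len - ip_header_len - tcp_header_len;
}
```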
Fixes #995
If recording is set to FALSE after the last audio or video buffer and
before the EOS event then recording stop is never signalled.
Similarly, we should signal recording stop once both audio and video are
EOS, regardless of the recording property, as there's nothing to be
recorded anymore.
We generally always prefer the information from upstream for other
metadata (pixel-aspect-ratio, etc.) and should also do so here.
Other parsers (h264parse) already do the same.
When SPS parsing fails we use a fallback SPS from the caps. Since we
have got an SPS, we need to update the parser state and header as in
the case where the SPS was successfully parsed.
When SPS parsing fails we use a fallback SPS from the caps. Since we
have got an SPS, we need to update the parser state and header as in
the case where the SPS was successfully parsed.
Closes #503
Set more, previously unhandled, flags in the general_constraint_indicator_flags
field. The field is required for building the "Codecs" parameter as defined
in ISO/IEC 14496-15 Annex E. The resulting "Codecs" string might be used
in various places (e.g., HLS/DASH manifest, browser, player, etc.)
This is a re-implementation of the RTP elements that were submitted in
2013 to handle RTP streams. The elements handle a correct connection
for the bi-directional use of the RTCP sockets.
https://bugzilla.gnome.org/show_bug.cgi?id=703111
The rtpsink and rtpsrc elements add a URI interface so that streams
can be decoded with decodebin using rtp:// URIs.
The code can be used as follows:
```
gst-launch-1.0 videotestsrc ! x264enc ! rtph264pay config-interval=3 ! rtpsink uri=rtp://239.1.1.1:1234
gst-launch-1.0 videotestsrc ! x264enc ! rtph264pay config-interval=1 ! rtpsink uri=rtp://239.1.2.3:5000
gst-launch-1.0 rtpsrc uri=rtp://239.1.2.3:5000?encoding-name=H264 ! rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink
gst-launch-1.0 videotestsrc ! avenc_mpeg4 ! rtpmp4vpay config-interval=1 ! rtpsink uri=rtp://239.1.2.3:5000
gst-launch-1.0 rtpsrc uri=rtp://239.1.2.3:5000?encoding-name=MP4V-ES ! rtpmp4vdepay ! avdec_mpeg4 ! videoconvert ! xvimagesink
```
rtpmanagerbad: add pkg-config
rtpmanagerbad: Rtp should be uppercase
rtpmanagerbad: add G_OS_WIN32 for shielding unix headers
rtpmanagerbad: remove Since from documentation
rtpmanagerbad: rename lib name from nrtp to rtpmanagerbad
rtpmanagerbad: sync meson.build with other modules
rtpmanagerbad: add Makefile.am
rtpmanagerbad: use GstElement to count pads
rtpmanagerbad: use gst_bin_set_suppressed_flags
rtpmanagerbad: check element creation
rtpmanagerbad: post message when trying to access missing rtpbin
rtpmanagerbad: return FALSE with g_return tests
rtpmanagerbad: use gsocket multicast check
rtpmanagerbad: use gst_caps_new_empty_simple instead of gst_caps_from_string
rtpmanagerbad: sync with gstrtppayloads.h
rtpmanagerbad: correct media type X-GST
rtpmanagerbad: test if a compatible pad was found
rtpmanagerbad: remove evil copy of GstRTPPayloadInfo
rtpmanagerbad: add gio_dep to meson
rtpmanagerbad: revert to old glib boilerplate
GStreamer 1.16 does not yet support the newer GLib templates, so revert.
rtpmanagerbad: return GST_STATE_CHANGE_NO_PREROLL for live sources
for live sources, NO_PREROLL should be returned for PLAYING->PAUSED and
READY->PAUSED transitions.
rtpmanagerbad: use GstElement pad counting
rtpmanagerbad: just use template name to request pad
rtpmanagerbad: remove commented code
rtpmanagerbad: use funnel to send multiple streams on one socket
rtpmanagerbad: avoid beaches
beaches should only be used during the summer, so rewrite the code to
return explicitly and avoid beaches during the winter.
rtpmanagerbad: add copyright to test code
rtpmanagerbad: g_free is NULL safe
rtpmanagerbad: do not trace rtpbin
rtpmanagerbad: return NULL explicitly
rtpmanagerbad: warn when data port is not even
According to RFC 3550, RTP data should be sent on even ports, while RTCP
is sent on the following odd port.
rtpmanagerbad: document port allocation in rtpsink/src
rtpmanagerbad: improve uri description
rtpmanagerbad: add comment re-use socket
rtpmanagerbad: rename gst_object_set_properties_from_uri_query
rtpmanagerbad: loan prop/val setter from rist
rtpmanagerbad: rtpsrc: fix uninitialised pointer
rtpmanagerbad: fix silly typo
rtpmanagerbad: test for empty key/value
rtpmanagerbad: rtpsrc: deprecate ssrc collision to INFO
rtpmanagerbad: sync debug with rist
rtpmanagerbad: small strings allocated on stack
rtpmanagerbad: correct rename
rtpmanagerbad: add locking on prop setters/getters
Locking is added because the properties can also be accessed through the URI.
rtpmanagerbad: allow for RTCP through NAT
rtpmanagerbad: move gio to header file
rtpmanagerbad: free small strings too
rtpmanagerbad: ttl_mc for ttl on dynudpsink
rtpmanagerbad: add comments on the URI registered
rtpmanagerbad: correct macro after file rename
rtpmanagerbad: code style
rtpmanagerbad: handle wrong URIs in setter
rtpmanagerbad: nit URI notation correction
In a URI, the first key/value pair should not be preceded by an
ampersand; the parser did not die on it though.
As sections can be provided by the user through send_event
when the element state is NULL, their lifetime is expected
to match that of the muxer, and they must be preserved when
the state changes
We can have multiple TsMuxPacketInfo objects for the same PID
with user-provided sections, for example ATSC requires multiple
tables with the same PID.
This might be necessary temporarily for changing the previous settings.
Make it an actual error if the settings are like this while processing a
buffer.
It is parsing frame data and so should check the data size against the
frame header size instead of the file header size. If it doesn't, it is
possible to drop the last frame, because IVF_FILE_HEADER_SIZE is greater
than IVF_FRAME_HEADER_SIZE.
This patch adds support for configuring the bonding method used. There
are two methods specified:
- redundant: all the RTP packets are replicated
- combined: RTP packets are evenly distributed over the links
Additionally, an application can set the "dispatcher" property in order
to implement a custom dispatching method. Whenever the "dispatcher"
property is set, the "bonding-method" property is ignored.
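A sketch of both options, assuming the enum nick "combined" and that
"dispatcher" takes an element:
```
#include <gst/gst.h>

static void
configure_bonding (GstElement * ristsink, GstElement * my_dispatcher)
{
  /* Assumed enum nick for the combined bonding method. */
  gst_util_set_object_arg (G_OBJECT (ristsink), "bonding-method", "combined");

  /* A custom dispatcher, if set, makes bonding-method ignored. */
  if (my_dispatcher != NULL)
    g_object_set (ristsink, "dispatcher", my_dispatcher, NULL);
}
```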
As we can now have multiple sessions, stats need to be implemented per
session. This follows the RTPSession model with sources. The stats are now:
dropped: 0
received: 0
recovered: 0
permanently-lost: 0
duplicates: 0
retransmission-requests-sent: 0
rtx-roundtrip-time: 0
session-stats:
session-id=0
rtp-from=""
rtcp-from=""
dropped=0
received=0
session-id=1
rtp-from=""
rtcp-from=""
dropped=0
received=0
. . .
session-stats is a GValueArray as there is no better alternative.
As we can now have multiple sessions, stats need to be implemented per
session. This follows the RTPSession model with sources. The stats are now:
sent-original-packets: 0
sent-retransmitted-packets: 0
session-stats:
session-id=0
sent-original-packets=0
sent-retransmitted-packets=0
round-trip-time=0
session-id=1
sent-original-packets=0
sent-retransmitted-packets=0
round-trip-time=0
. . .
session-stats is a GValueArray as there is no better alternative.
gstmpegtsmux.c:291:3: error: implicit declaration of function ‘memmove’ [-Werror=implicit-function-declaration]
memmove (map.data + 4, map.data, map.size - 4);
^
gstmpegtsmux.c:291:3: error: incompatible implicit declaration of built-in function ‘memmove’ [-Werror]
gstmpegtsmux.c:291:3: note: include ‘<string.h>’ or provide a declaration of ‘memmove’
... and set to caps if necessary.
Note 1) The mastering display info and content light level SEI messages
are persistent in the corresponding codec video sequence (i.e., GOP).
So any bitstream containing those SEI messages
(where all pictures are intended to be HDR rendered) should ensure that
each first slice of a codec video sequence follows those SEI messages.
Note 2) A codec video sequence is a group consisting of an
[IRAP + NoRaslOutputFlag == 1] AU and the following AUs which are not
[IRAP + NoRaslOutputFlag == 1].
The NoRaslOutputFlag is equal to 1 for each IDR AU, each BLA AU and some CRA AUs.
For a CRA AU to have NoRaslOutputFlag equal to 1, one of the following conditions must hold:
* the CRA AU is the first AU in the bitstream in decoding order,
* or the CRA AU is the first AU that follows an end of sequence NAL in decoding order,
* or HandleCraAsBlaFlag is equal to 1.
Due to the limited context in the parse element, in this commit a CRA AU
will not be considered as having NoRaslOutputFlag equal to 1. Therefore,
in the worst case, mastering-display-info and content-light-level could
be cleared one GOP late when the stream changes from HDR to SDR.
RIST TR-06-1 is a specification for video streaming made by the VSF
group. It uses a subset of the RTP specification, with some
modifications made to improve RTX behaviour and avoid any need for
signalling. The plugin implements the ristrtxsend / ristrtxreceive
elements, which are the RIST-specific equivalents of
rtprtxsend / rtprtxreceive, and ristsink / ristsrc, which implement the
RIST transmitter and receiver. The RIST protocol is meant to be used in
a unidirectional way. Typically, MPEG TS over RTP is used.
Currently we support unicast and multicast streaming according to the
specification. This patch does not include any bonding support yet. The
ristsrc element introduces rist:// URI handling in parallel to its
property configuration interface.
Expose SEI data in the H.264 bitstream parser API and
extract closed captions and other things that are not
specified in the H.264 spec itself in the videoparser.
Based on patch by: Mathieu Duponchelle <mathieu@centricular.com>
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/940
when computing timecode metas. Depending on the value of that flag,
n_frames is to be interpreted as a number of fields or a number of
frames. As GstVideoTimeCodeMeta always deals with frames, we want
to scale that number when needed.
This debug code will help determine why certain instances of closed
captions that are present in the Picture User Data are not actually
processed by the pipeline
In 7c767f3fcd, stream creation was
refactored to occur before potential program creation. This created
issues with pipelines such as:
gst-launch-1.0 videotestsrc ! video/x-raw, format=I420, width=640, height=640, framerate=25/1 ! \
x264enc ! hlssink2 target-duration=1
e.g.: gst_buffer_copy_into: assertion 'bufsize >= offset + size' failed
As this reordering was actually not needed for the purpose of allowing
to specify a PCR stream, this reverts the reordering part of the
initial commit.
The MPEG-TS packetiser should use the upstream DTS for
skew correction when running in that mode, as the DTS
carries the upstream arrival time. The PTS (if it's
set at all) is less useful, and can be invalid.
gstladspa.c:360:5: error: zero-length ms_printf format string [-Werror=format-zero-length]
vad_private.c:108:3: error: this decimal constant is unsigned only in ISO C90 [-Werror]
gstdecklinkvideosink.cpp:478:32: error: comparison between 'BMDTimecodeFormat {aka enum _BMDTimecodeFormat}' and 'enum GstDecklinkTimecodeFormat' [-Werror=enum-compare]
win/DeckLinkAPI_i.c:72:8: error: extra tokens at end of #endif directive [-Werror]
win/DeckLinkAPIDispatch.cpp:35:10: error: unused variable 'res' [-Werror=unused-variable]
gstwasapiutil.c:733:3: error: format '%x' expects argument of type 'unsigned int', but argument 8 has type 'DWORD' [-Werror=format]
gstwasapiutil.c:733:3: error: format '%x' expects argument of type 'unsigned int', but argument 9 has type 'guint64' [-Werror=format]
kshelpers.c:446:3: error: missing braces around initializer [-Werror=missing-braces]
kshelpers.c:446:3: error: (near initialization for 'known_property_sets[0].guid.Data4') [-Werror=missing-braces]
The way FlowCombiner combines FLUSHING doesn't work in the case where
we have several "sinkpads", since any pad returning FLUSHING makes the
combined result FLUSHING. But in the case of a seek where the flush is
done on only one branch, we should just return OK, otherwise we might
return FLUSHING to a src that has already been seeked and is ready to
process new buffers.
/usr/bin/ld: .libs/libgstremovesilence_la-vad_private.o: in function `vad_set_threshold':
./gst/removesilence/vad_private.c:108: undefined reference to `pow'
/usr/bin/ld: .libs/libgstremovesilence_la-vad_private.o: in function `vad_get_threshold_as_db':
./gst/removesilence/vad_private.c:114: undefined reference to `log10'
vps/sps/pps in codec_data shouldn't be considered as inband data.
Otherwise, h26{4,5}parse never inserts them into the NALs in the
transform (packetized to byte-stream) use case.
Similar change as the one I did in h264parse. We want to process SEI
recovery point as keyframe so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
The spec states that "recovery point SEI message assists a decoder in
determining when the decoding process will produce acceptable
pictures for display after the decoder initiates random access or after the
encoder indicates a broken link in the coded video sequence."
Mark those as keyframes so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
This removes the crossfade-ratio property and replaces it with an
operator property. Currently this implements the following operators:
- SOURCE: Copy over the source and don't look at the destination
- OVER: Default blending of the source over the destination
- ADD: Like OVER but simply adding the alpha instead
See the example for how to implement crossfading with this.
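A sketch of such a crossfade, assuming the pad property is named
"operator" with the enum nick "add":
```
#include <gst/gst.h>

static void
set_crossfade (GstElement * compositor, gdouble ratio)
{
  GstPad *bg = gst_element_get_static_pad (compositor, "sink_0");
  GstPad *fg = gst_element_get_static_pad (compositor, "sink_1");

  /* Background keeps the default OVER; the foreground uses ADD so two
   * opaque inputs stay fully opaque at every point of the fade. */
  gst_util_set_object_arg (G_OBJECT (fg), "operator", "add");
  g_object_set (bg, "alpha", 1.0 - ratio, NULL);
  g_object_set (fg, "alpha", ratio, NULL);

  gst_object_unref (bg);
  gst_object_unref (fg);
}
```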
https://bugzilla.gnome.org/show_bug.cgi?id=797169
If the first audio buffer to be dropped started right between two video
buffers (after the end of the first but before the start of the second,
as is often the case with N/1001 video frame rates), we would miss
sending the dropping=true message.
https://bugzilla.gnome.org/show_bug.cgi?id=797248
tsdemux expects a custom descriptor (GST_MTS_DESC_AC3_AUDIO_STREAM)
to detect a stream as AC3 and not EAC3.
Note that tsdemux expects this descriptor because mpegtsmux writes
a stream with a HDMV registration descriptor.
Fixes:
gst-launch-1.0 -v audiotestsrc ! avenc_ac3 ! ac3parse ! mpegtsmux ! \
tsdemux ! ac3parse ! avdec_ac3 ! audioconvert ! autoaudiosink
https://bugzilla.gnome.org/show_bug.cgi?id=797220
Previously it was dispatched before the last video buffer, and audio
buffers would follow afterwards. It's misleading to send the
dropping=true message before both streams have really stopped, it can
lead to races when someone is e.g. waiting for that message to send EOS.
Also added some debug output.
https://bugzilla.gnome.org/show_bug.cgi?id=797145
Directly apply commit 7bb6443. This could also fix unexpected
NAL dropping when a nonzero "config-interval" is set.
(e.g., gst-launch-1.0 videotestsrc ! x265enc key-int-max=30 !
h265parse config-interval=30 ! avdec_h265 ! videoconvert ! autovideosink)
Similar to h264parse, the have_{vps,sps,pps} variables will be used
for deciding when to submit updated caps or not, and rather mean
"have new SPS/PPS to be submitted?"
See also https://bugzilla.gnome.org/show_bug.cgi?id=732203
and https://bugzilla.gnome.org/show_bug.cgi?id=754124
If we drain after a discont, the discont time given by the stream
synchronizer is already the time after the discontinuity. But we need to
drain all pending data based on the previous discont time instead.
The case is properly handled a few lines below by dropping the buffer.
We shouldn't perpetually block the audio chain function until the
target-timecode is reached.
https://bugzilla.gnome.org/show_bug.cgi?id=796906
This change allows setting timestamps on streams that would otherwise
have no timestamp. This is useful to make a video from a bunch of JPEG
files. An example of such a pipeline would be:
gst-launch-1.0 multifilesrc location=%05d.jpeg caps=image/jpeg,framerate=30/1 \
! jpegparse ! fakesink silent=0 -v
It works like a valve in front of the actual avwait. When recording ==
TRUE, other rules are then examined. When recording == FALSE, nothing is
passing through.
https://bugzilla.gnome.org/show_bug.cgi?id=796836
255 will easily become 0 in the blending function as they expect
the maximum value to be 255.
Can be reproduce with
gst-launch-1.0 videotestsrc pattern=ball ! c.sink_0 \
videotestsrc pattern=snow ! c.sink_1 \
compositor name=c \
sink_0::zorder=0 sink_1::zorder=1 sink_0::crossfade-ratio=0.5 \
background=black ! \
videoconvert ! xvimagesink
crossfade-ratio +/- 0.001 makes it work correctly and the same happens
at e.g. 0.25, 0.75, N*0.0625
https://bugzilla.gnome.org/show_bug.cgi?id=796846
Adds the AV01 FOURCC to the list of allowed media, in order to allow
parsing IVF containers holding AV1 content.
At a later point dynamic resolution change can be supported; for that,
the sequence header OBU and frame header OBU of the AV1 file must be
parsed, which can be done in the future with the help of the gst-lib
gstav1parse.
https://bugzilla.gnome.org/show_bug.cgi?id=796677
This moves all the conversion related code to a single place, allows
less code-duplication inside compositor and makes the glmixer code less
awkward. It's also the same pattern as used by GstAudioAggregator.
The aggregated_frame is now called prepared_frame and passed to the
prepare_frame and cleanup_frame virtual methods directly. For the
currently queued buffer there is a method on the video aggregator pad
now.
Unless we only have sparse streams. In this case we will consider them.
It fixes a bug happening when the first observed timestamp comes from a
sparse stream and the other streams don't have a valid timestamp yet,
leading the timestamp from the sparse stream to become the start of the
following segment. In this case, if the timestamp is really bigger than
the non-sparse streams' (audio/video), it will lead the pipeline to clip
samples from the non-sparse stream.
https://bugzilla.gnome.org/show_bug.cgi?id=744469
Scene detection determines how many scene changes occur in a video.
It compares the previous frame with the present frame to compute a
score, and a threshold is applied to that score.
I have added an intermediate condition which helps improve the positive
detections.
https://bugzilla.gnome.org/show_bug.cgi?id=735094
We were assuming that NULL pool meant that downstream didn't reply.
Update the pool at index 0 instead of adding one at the end. Otherwise we ended
up letting basesrc decide, which would pick the blocksize as a size
(4096) instead of the image size.
https://bugzilla.gnome.org/show_bug.cgi?id=795327
This server uses an unknown 003.889 protocol version. This patch fixes
the version validation in order to simply fall back to 3.3 as suggested
by the spec.
We would mark all streams with FLAG_UNSELECT as we would check
the pointer for non-NULLness not the dereferenced stream number
(and the pointer is always non-NULL). The intention here was
presumably to mark the first stream of each type as SELECT and
the others as UNSELECT by default.
CID 1434970.
This is a simple Bin that will expose audiotestsrc or videotestsrc
based on what is asked by the user either through the GstURIHandler
API or through the "stream-types" property.
This element also provides GstStream and GstStreamCollection
so it is nicely usable from playbin3.
https://bugzilla.gnome.org/show_bug.cgi?id=795366
pcapparse cannot parse fragmented IP packets correctly; in particular,
it will get confused when trying to parse fragments as standalone
frames, in two ways:
1. the first fragment will have the packet length greater than the
frame size and will always be discarded;
2. fragments with non-zero offsets will be interpreted as full packets
and the first part of their raw payload data will be parsed as the
transport protocol header, resulting in bogus values for addresses
and ports, thus evading the properties filtering on those values.
This can make it difficult for users to see why the data does not get
downstream.
So be more explicit and just bail out when fragmented packets are
encountered.
https://bugzilla.gnome.org/show_bug.cgi?id=795284
If the 'enable-last-sample' property is enabled, fakevideosink will keep
a reference to the last rendered buffer, which may lead to buffer
starvation in the pipeline.
Request one extra buffer in this case so we always have a buffer flying
in the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=795109
When starting up we need to initialise things *before*
streaming starts, so before we chain up to the parent
class in the state change function. And when we shut
down the element, we need to reset things after streaming
has stopped, so after we chain up to the parent class
in the state change function.
Possibly related to memory leak in:
https://bugzilla.gnome.org/show_bug.cgi?id=794353
We used to have the same enum to represent H265 profiles and idc values.
Those are no longer the same with extension profiles defined from
version 2 of the spec.
Split those enums so the semantic of each is clearer and we'll be able
to add extension profiles to GstH265Profile.
Also add gst_h265_profile_tier_level_get_profile() to retrieve the
GstH265Profile from the GstH265ProfileTierLevel. It will be used to
implement the detection of extension profiles.
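A sketch of the intended use, assuming the GstH265SPS field is named
profile_tier_level:
```
#include <gst/codecparsers/gsth265parser.h>

/* Map a parsed SPS to a GstH265Profile, covering extension profiles. */
static GstH265Profile
sps_to_profile (GstH265SPS * sps)
{
  return gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);
}
```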
https://bugzilla.gnome.org/show_bug.cgi?id=793876
Measures the audio latency between the source pad and the sink pad by
outputting period ticks on the source pad and measuring how long they
take to arrive on the sink pad.
Very useful for quantifying latency improvements in audio pipelines.
This plugin was particularly useful during development of the
low-latency features of the wasapi plugin.
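A loopback sketch; the element name comes from the description, while
the surrounding elements and the print-latency property are assumptions:
```
#include <gst/gst.h>

/* The physical audio output must be wired back into the input
 * (e.g. with a cable) for the round trip to be measurable. */
static GstElement *
make_latency_pipeline (void)
{
  return gst_parse_launch (
      "autoaudiosrc ! audiolatency print-latency=true ! autoaudiosink",
      NULL);
}
```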
https://bugzilla.gnome.org/show_bug.cgi?id=793839
This is a wrapper around fakesink that will advertise GstVideoMeta
and other meta API in order to achieve zero-copy whenever possible.
This new element is useful when doing performance testing with video
streams and you don't want the sink capabilities to change the upstream
behaviour.
https://bugzilla.gnome.org/show_bug.cgi?id=793624
The pnmenc was not mapping the input buffers as video buffers. Because
of this, the video frame stride was not being set based on the frame but
based on the caps, which assume that the strides are a power of 4. For
input that is not a power of 4, this would lead to a SIGSEGV.
https://bugzilla.gnome.org/show_bug.cgi?id=793419
The inter plugin originated in 0.10, which had only one timestamp. As a
result, during the port to 1.0, the DTS were left undefined. This can cause
subtle bugs with basesrc, which can end up incorrectly picking DTS over PTS
and producing output buffers with incorrect timestamps.
https://bugzilla.gnome.org/show_bug.cgi?id=791347
This keep-it-simple plugin is useful when you want to pipe arbitrary
data to a different pipeline within the same process. Some advantages
over appsink/appsrc, the inter elements, etc:
* Ease of use. Buffers, events, and caps are transmitted as-is without
copying or serialization.
* Enables zerocopy (especially DMABUF) transparently without any
special-casing.
* Enables usage with sinks or elements that are unreliable and may
throw errors and need re-initialization, such as a network sink, a
USB device sink (v4l2), etc.
* Transmits arbitrary data, not just audio/video/subs
* Can easily implement 1 producer pipeline -> N dynamic consumer
pipelines within a single process when combined with the `tee`
element.
All queries, events, buffers, and buffer lists are proxied. State
changes, clocks, and base times for the two pipelines are independent
since the upstream and downstreams continue to be different pipelines.
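A sketch of tying two pipelines together, assuming proxysrc exposes a
"proxysink" property:
```
#include <gst/gst.h>

static void
link_pipelines (GstElement * psink, GstElement * psrc)
{
  /* psink lives in the producer pipeline, psrc in the consumer one;
   * handing the proxysink instance over connects the two. */
  g_object_set (psrc, "proxysink", psink, NULL);
}
```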
https://bugzilla.gnome.org/show_bug.cgi?id=788200
The gdpdepay element uses decide_allocation to fetch the downstream
allocator. Nonetheless, it is possible that the allocator uses a custom
alloc function, which is not usable by gdpdepay, later crashing the
application when the allocated buffer is NULL.
This patch checks the allocator flags and resets the allocator if it
has a custom alloc function.
https://bugzilla.gnome.org/show_bug.cgi?id=789476
When querying downstream for allocation, when the source pad hasn't set
its caps yet (using ANY by default), a critical message is raised in
the console:
CRITICAL **: gst_video_info_from_caps: assertion 'gst_caps_is_fixed (caps)' failed
This patch bails out of decide_allocation() if the caps aren't fixed.
https://bugzilla.gnome.org/show_bug.cgi?id=789476
This information could be used for example to pick a decoder supporting
a specific chroma and/or bit depth, like 4:2:2 10 bits.
It can also be used to inform the decoder earlier about the format it
is about to decode.
https://bugzilla.gnome.org/show_bug.cgi?id=792039
This fixes issues where wavparse would query the file size upstream
and assert because the file size is way smaller than what the WAVE
header says. This patch disables or limits a handful of queries that
make no sense to forward.
https://bugzilla.gnome.org/show_bug.cgi?id=791811
This plugin is useful when you want to pipe arbitrary data to
a different pipeline within the same process. Buffers, events, and caps
are transmitted as-is without copying or manipulation.
"avwait-status" is posted when avwait starts or stops passing through
data (e.g. because target-timecode and end-timecode respectively have
been reached). The attached structure includes a "dropping" boolean (set
to TRUE if we are currently dropping data, FALSE otherwise), and a
"running-time" GST_CLOCK_TIME which contains the running time of the
change.
https://bugzilla.gnome.org/show_bug.cgi?id=790170
Reordering of packets is not very common in networks, and the delay
functions will always introduce reordering if delay > packet-spacing,
so by setting allow-reordering to FALSE you guarantee that the packets
are in order, while at the same time introducing delay/jitter to them.
By using the property "delay-distribution" the user can control how the
delay applied to delayed packets is distributed. This is either the
uniform distribution (as before) or the normal distribution.
"min-delay" and "max-delay" control both distributions. For the normal
distribution it defines the bounds of the 95% confidence interval.
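A sketch using the property names from the description; the value types
(delays in milliseconds) are assumptions:
```
#include <gst/gst.h>

static void
configure_netsim (GstElement * netsim)
{
  gst_util_set_object_arg (G_OBJECT (netsim), "delay-distribution", "normal");
  g_object_set (netsim,
      "min-delay", 10,            /* bounds of the 95% confidence */
      "max-delay", 100,           /* interval for the normal case */
      "allow-reordering", FALSE,  /* keep packets in order */
      NULL);
}
```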
When input is not in byte-stream format there is no need to wait for the first
buffer before setting src caps. We already have all the information from the
input codec_data.
This allows us to already configure downstream elements, allowing them,
for example, to already allocate their internal buffers as they know
the format of the input they are about to receive.
Same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
When input is in AVC format there is no need to wait for the first buffer
before setting src caps. We already have all the information from the
input codec_data.
This allows us to already configure downstream elements, allowing them,
for example, to already allocate their internal buffers as they know
the format of the input they are about to receive.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
Try prioritizing downstream's caps over upstream's so the parser can be
configured in "passthrough" mode if possible and be saved from doing
useless conversions.
Exact same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
Try prioritizing downstream's caps over upstream's so the parser can be
configured in "passthrough" mode if possible and be saved from doing
useless conversions.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
A deserialised timecode has a framerate of 0/1 by default. That breaks
it when comparing the frames field with another timecode (incoming from
the frame). We were setting the framerate when receiving the caps event,
but not when setting the timecode in set_property, so it was broken for
timecodes set after the caps event.
Also check that the fps_n we got from the caps event is != 0 before
setting it; this applies at the caps event as well.
https://bugzilla.gnome.org/show_bug.cgi?id=790334
Now that timecodes support proper serialisation / deserialisation, a
timecode might have an invalid fps_n / fps_d even without using the
target-time-code-string property. Detect those cases and set fps_n/fps_d
properly.
If end_tc is NULL, it means that we don't want avwait to stop at any
timecode. When explicitly setting end_tc to NULL, there is no point in
comparing end_tc with start_tc (to see if we'll reject end_tc for being
before start_tc), so the check in question is completely disabled
instead of letting it crash.
Add support for parsing linear time code from
an audio source using libltc
https://github.com/x42/libltc
The user can now choose between 3 different and independently
running timecode sources. The old override-existing property
has been replaced by timecode-source.
https://bugzilla.gnome.org/show_bug.cgi?id=784295
This element can be configured to add jitter and/or drift to incoming
buffers' PTS, DTS, or both. Amplitude and average of jitter and drift
are configurable.
https://bugzilla.gnome.org/show_bug.cgi?id=787358
avwait can now be configured to stop when a given timecode has been
reached. It will start at the timecode indicated with start-timecode and
end at the timecode indicated with end-timecode. If end-timecode is
NULL (default), the previous functionality is preserved: keep going and
never end.
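A sketch of setting such an end timecode, assuming the property takes a
GstVideoTimeCode:
```
#include <gst/video/video.h>

static void
stop_at_one_minute (GstElement * avwait)
{
  /* 00:01:00:00 at 30 fps; no daily jam, no flags, field_count 0. */
  GstVideoTimeCode *end_tc = gst_video_time_code_new (30, 1, NULL,
      GST_VIDEO_TIME_CODE_FLAGS_NONE, 0, 1, 0, 0, 0);

  g_object_set (avwait, "end-timecode", end_tc, NULL);
  gst_video_time_code_free (end_tc);
}
```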
https://bugzilla.gnome.org/show_bug.cgi?id=789403
* Avoid copying the pending data and instead create a buffer directly from
that data with the appropriate offset.
* Locate the jp2k magic to determine the exact location of the (first) frame
data instead of assuming that the header is of an expected size
https://bugzilla.gnome.org/show_bug.cgi?id=786111
The jp2k specification (ITU-T T.800) specifies that the 'brat' box
has two fields and the second one (AUF2) can be set to 0 for progressive
streams.
The problem is that the mpeg-ts specification (ITU-T H.222.0 06/2012)
says that the AUF2 field is only present if the stream is interlaced.
In order to cope with both situations, accept those next 32 bits if the
stream is marked as progressive and those bits contain 0.
https://bugzilla.gnome.org/show_bug.cgi?id=786111
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so
that the crossfade result of 2 opaque frames is also fully opaque at any
time in the crossfading process, avoiding bleed-through from the layer
blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
These elements allow splitting a pipeline across several processes,
with communication done by the ipcpipelinesink and ipcpipelinesrc
elements. The main use case is to split a playback pipeline into
a process that runs networking, parser & demuxer and another process
that runs the decoder & sink, for security reasons.
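A sketch of wiring the two halves over a socketpair, assuming the
elements expose "fdin"/"fdout" properties; in practice each end lives in
a different process:
```
#include <gst/gst.h>
#include <sys/socket.h>

static void
wire_ipcpipeline (GstElement * ipcpipelinesink, GstElement * ipcpipelinesrc)
{
  int fds[2];

  if (socketpair (AF_UNIX, SOCK_STREAM, 0, fds) != 0)
    return;

  /* master process: networking, parser and demuxer side */
  g_object_set (ipcpipelinesink, "fdin", fds[0], "fdout", fds[0], NULL);
  /* slave process: decoder and sink side */
  g_object_set (ipcpipelinesrc, "fdin", fds[1], "fdout", fds[1], NULL);
}
```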
https://bugzilla.gnome.org/show_bug.cgi?id=752214
This allows us to know exactly where in the material track we are, and
how to convert from a PTS for a source track to the actual PTS of the
material track (i.e. by adding the component start position).
https://bugzilla.gnome.org/show_bug.cgi?id=785119
While the size in the packet is only 16 bits, we need to handle bigger
sizes without overflowing. For video streams this can happen; 0 is
written to the stream instead.
This fixes muxing of buffers >= 2**16.
In this case, we assume that the format is jpc, and we infer the color
space from the number of components. This allows the parser to process a
jpc disk file coming from a filesrc element.
https://bugzilla.gnome.org/show_bug.cgi?id=783291
It is only relevant in deciding whether or not to send SEGMENT_DONE.
In this case, not detecting EOS leads to a busy loop when encountering
the originally recorded end-of-file of a file that is still growing.
While only filler packets should be allowed, for good measure also skip
any other KLV packets in the range where there could be index table
segments.
This fixes parsing of partitions with multiple index table segments,
which are separated by a filler packet, or other packets.
This is needed to know the PTS, without that we only know the DTS and
using that also for the PTS is wrong unless we have an intra-only codec.
If we can't get the temporal reordering from the index table, don't set
any PTS for non-intra-only codecs and let decoders figure out something.
https://bugzilla.gnome.org/show_bug.cgi?id=784027
When retrieving the `mxfdemux.structure` property, an assertion is hit,
as the metadata needs to be resolved for the call to
mxf_metadata_base_to_structure to be valid.
The RSIZ capabilities tag stores the JPEG 2000 profile. In the case of
broadcast profiles, it also stores the broadcast main level, which
specifies the bit rate.
https://bugzilla.gnome.org/show_bug.cgi?id=782337
Also swap the linktype after we detected that we need to do
byteswapping. Fixes a problem with reading pcap files generated
on a machine with different endianness.
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily being used
immediately, which sometimes resulted in using an old buffer with new
caps. Of course there used to be a separate buffer_vinfo for mapping the
buffer with its own caps, but in compositor the GstVideoConverter was
still using the wrong info, which resulted in invalid reads and corrupt
output.
This approach here is more safe. We delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since the
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
When there are more than 64 channels, we don't want to exceed the
bounds of the ordering_map buffer, and in these cases we don't want to
remap at all. Here we avoid doing that.
Based on a patch originally for plugins-good/interleave in
https://bugzilla.gnome.org/show_bug.cgi?id=780331
This duplicated property is no longer needed as there is now API to
allow bindings access to GST_TYPE_ARRAY (see gst_util_get/set_object_array).
Additionally, Python has proper overrides which make this look like
Python. A 2x2 matrix would be set this way:
matrix = Gst.ValueArray([Gst.ValueArray([1.0, -1.0]),
                         Gst.ValueArray([1.0, -1.0])])
Notice that you need to "cast" each array to Gst.ValueArray, otherwise
there is an ambiguity between the Gst.ValueArray and Gst.ValueList list
types.
Fortunately, Gst.ValueArray implements the Sequence interface, so it can
be indexed like a normal Python matrix.