This information could be used, for example, to pick a decoder supporting
a specific chroma format and/or bit depth, like 4:2:2 10-bit.
It can also be used to inform the decoder early about the format it is
about to decode.
https://bugzilla.gnome.org/show_bug.cgi?id=792039
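As a rough illustration, the src caps for a 4:2:2 10-bit H.264 stream would
then carry fields along these lines (field names assumed from the description
above, shown as a C sketch):
  GstCaps *caps = gst_caps_from_string ("video/x-h264, "
      "chroma-format=(string)4:2:2, "
      "bit-depth-luma=(uint)10, bit-depth-chroma=(uint)10");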
This fixes issues where wavparse would query the file size upstream
and assert because the file size is way smaller than what the WAVE
header says. This patch disables or adjusts a handful of queries that
make no sense to forward.
https://bugzilla.gnome.org/show_bug.cgi?id=791811
This plugin is useful when you want to pipe arbitrary data to
a different pipeline within the same process. Buffers, events, and caps
are transmitted as-is without copying or manipulation.
"avwait-status" is posted when avwait starts or stops passing through
data (e.g. because target-timecode and end-timecode respectively have
been reached). The attached structure includes a "dropping" boolean (set
to TRUE if we are currently dropping data, FALSE otherwise), and a
"running-time" GST_CLOCK_TIME which contains the running time of the
change.
https://bugzilla.gnome.org/show_bug.cgi?id=790170
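As a hedged sketch, a bus watch could read this message along these lines
(only the structure name and fields listed above are used):
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT) {
    const GstStructure *s = gst_message_get_structure (msg);
    if (gst_structure_has_name (s, "avwait-status")) {
      gboolean dropping = FALSE;
      GstClockTime running_time = GST_CLOCK_TIME_NONE;
      gst_structure_get_boolean (s, "dropping", &dropping);
      gst_structure_get_clock_time (s, "running-time", &running_time);
      g_print ("avwait %s at %" GST_TIME_FORMAT "\n",
          dropping ? "dropping" : "passing", GST_TIME_ARGS (running_time));
    }
  }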
Reordering of packets is not very common in networks, and the delay
functions will always introduce reordering if delay > packet-spacing,
so by setting allow-reordering to FALSE you guarantee that the packets
are in order, while at the same time introducing delay/jitter to them.
By using the property "delay-distribution" the user can control how the
delay applied to delayed packets is distributed. This is either the
uniform distribution (as before) or the normal distribution.
"min-delay" and "max-delay" control both distributions. For the normal
distribution it defines the bounds of the 95% confidence interval.
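For illustration, a possible configuration adding 10-100 ms of normally
distributed delay without reordering could look like this (the
"delay-probability" property and the "normal" enum nick are assumptions,
not stated above):
  GstElement *netsim = gst_element_factory_make ("netsim", NULL);
  g_object_set (netsim,
      "delay-probability", 1.0,   /* delay every packet (assumed property) */
      "min-delay", 10, "max-delay", 100,
      "allow-reordering", FALSE, NULL);
  gst_util_set_object_arg (G_OBJECT (netsim), "delay-distribution", "normal");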
When input is not in byte-stream format there is no need to wait for the first
buffer before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, letting
them, for example, allocate their internal buffers as they know the
format of the input they are about to receive.
Same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
When input is in AVC format there is no need to wait for the first buffer
before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, letting
them, for example, allocate their internal buffers as they know the
format of the input they are about to receive.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
Try prioritizing downstream's caps over upstream's so the
parser can be configured in "passthrough" mode if possible and saved
from doing useless conversions.
Exact same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
Try prioritizing downstream's caps over upstream's so the
parser can be configured in "passthrough" mode if possible and saved
from doing useless conversions.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
A deserialised timecode has a framerate of 0/1 by default. That breaks
it when comparing the frames field with another timecode (incoming from
the frame). We were setting the framerate when receiving the caps event,
but not when setting the timecode in set_property, so it was broken for
timecodes set after the caps event.
Also check that the fps_n we got from the caps event is != 0 before
setting it - this is done at the caps event as well.
https://bugzilla.gnome.org/show_bug.cgi?id=790334
Now that timecodes support proper serialisation / deserialisation, a
timecode might have an invalid fps_n / fps_d even without using the
target-time-code-string property. Detect those cases and set fps_n/fps_d
properly.
If end_tc is NULL, it means that we don't want avwait to stop at any
timecode. When explicitly setting end_tc to NULL, there is no point in
comparing end_tc with start_tc (to see if we'll reject end_tc for being
before start_tc), so the check in question is completely disabled
instead of letting it crash.
Add support for parsing linear timecode (LTC) from
an audio source using libltc
https://github.com/x42/libltc
The user can now choose between 3 different and independently
running timecode sources. The old override-existing property
has been replaced by timecode-source.
https://bugzilla.gnome.org/show_bug.cgi?id=784295
This element can be configured to add jitter and/or drift to incoming
buffers' PTS, DTS, or both. Amplitude and average of jitter and drift
are configurable.
https://bugzilla.gnome.org/show_bug.cgi?id=787358
avwait can now be configured to stop when a given timecode has been
reached. It will start at the timecode indicated with start-timecode and
end at the timecode indicated with end-timecode. If end-timecode is
NULL (default), the previous functionality is preserved: keep going and
not end.
https://bugzilla.gnome.org/show_bug.cgi?id=789403
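A minimal sketch of configuring such a window from code, assuming an avwait
instance in the variable "avwait" and 25 fps timecodes picked purely as an
example:
  GstVideoTimeCode *start = gst_video_time_code_new (25, 1, NULL,
      GST_VIDEO_TIME_CODE_FLAGS_NONE, 10, 0, 0, 0, 0);  /* 10:00:00:00 */
  GstVideoTimeCode *end = gst_video_time_code_new (25, 1, NULL,
      GST_VIDEO_TIME_CODE_FLAGS_NONE, 10, 5, 0, 0, 0);  /* 10:05:00:00 */
  g_object_set (avwait, "target-timecode", start, "end-timecode", end, NULL);
  gst_video_time_code_free (start);
  gst_video_time_code_free (end);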
* Avoid copying the pending data and instead create a buffer directly from
that data with the appropriate offset.
* Locate the jp2k magic to determine the exact location of the (first) frame
data instead of assuming that the header is of an expected size
https://bugzilla.gnome.org/show_bug.cgi?id=786111
The jp2k specification (ITU-T T.800) specifies that the 'brat' box
has two fields and the second one (AUF2) can be set to 0 for progressive
streams.
The problem is that the mpeg-ts specification (ITU-T H.222.0 06/2012)
says that the AUF2 field is only present if the stream is interlaced.
In order to cope with both situations, accept the next 32 bits if the
stream is marked as progressive and those bits contain 0.
https://bugzilla.gnome.org/show_bug.cgi?id=786111
Crossfading is a bit more complex than just having two pads with the
right keyframes as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between the
destination and the source (factored by the input pad's alpha property
and the crossfading ratio), basically so that the crossfade result of
two opaque frames is also fully opaque at any time in the crossfading
process, avoiding bleeding through the layer blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
These elements allow splitting a pipeline across several processes,
with communication done by the ipcpipelinesink and ipcpipelinesrc
elements. The main use case is to split a playback pipeline into
a process that runs networking, parser & demuxer and another process
that runs the decoder & sink, for security reasons.
https://bugzilla.gnome.org/show_bug.cgi?id=752214
This allows us to know exactly where in the material track we are, and
how to convert from a PTS for a source track to the actual PTS of the
material track (i.e. by adding the component start position).
https://bugzilla.gnome.org/show_bug.cgi?id=785119
While the size field in the packet is only 16 bits, we need to handle
bigger sizes without overflowing. For video streams this can happen;
in that case 0 is written to the stream instead.
This fixes muxing of buffers >= 2**16 bytes.
In this case, we assume that the format is jpc, and we infer the color
space from the number of components. This allows the parser to process a
jpc disk file coming from a filesrc element.
https://bugzilla.gnome.org/show_bug.cgi?id=783291
It is only relevant in deciding whether or not to send SEGMENT_DONE.
In this case, not detecting EOS leads to a busy loop when encountering
the originally recorded end-of-file of a file that is still growing.
While only filler packets should be allowed, for good measure also skip
any other KLV packets in the range where there could be index table
segments.
This fixes parsing of partitions with multiple index table segments,
which are separated by a filler packet, or other packets.
This is needed to know the PTS; without it we only know the DTS, and
using that also for the PTS is wrong unless we have an intra-only codec.
If we can't get the temporal reordering from the index table, don't set
any PTS for non-intra-only codecs and let decoders figure out something.
https://bugzilla.gnome.org/show_bug.cgi?id=784027
When retrieving the `mxfdemux.structure` property, it leads to an
assertion as the metadata needs to be resolved for the call to
mxf_metadata_base_to_structure to be valid.
The RSIZ capabilities tag stores the JPEG 2000 profile. In the case of
broadcast profiles, it also stores the broadcast main level, which
specifies the bit rate.
https://bugzilla.gnome.org/show_bug.cgi?id=782337
Also swap the linktype after we detected that we need to do
byteswapping. Fixes a problem with reading pcap files generated
on a machine with different endianness.
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily used immediately,
which sometimes resulted in using an old buffer with new caps. Of course
there used to be a separate buffer_vinfo for mapping the buffer with its
own caps, but in compositor the GstVideoConverter was still using the
wrong info and resulted in invalid reads and corrupt output.
The approach here is safer. We delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since the
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
When there are more than 64 channels, we don't want to exceed the
bounds of the ordering_map buffer, and in these cases we don't want to
remap at all. Here we avoid doing that.
Based on a patch originally for plugins-good/interleave in
https://bugzilla.gnome.org/show_bug.cgi?id=780331
This duplicated property is no longer needed as there is now API to
allow bindings to access GST_TYPE_ARRAY (see gst_util_get/set_object_array).
Additionally, Python has proper overrides which make this look
Pythonic. A 2x2 matrix would be set this way:
element.matrix = Gst.ValueArray([Gst.ValueArray([1.0, -1.0]),
                                 Gst.ValueArray([1.0, -1.0])])
Notice that you need to "cast" each array to Gst.ValueArray, otherwise
there is an ambiguity between Gst.ValueArray and the Gst.ValueList type.
Fortunately, Gst.ValueArray implements the Sequence interface, so it can
be indexed like a normal Python matrix.
Inserts an AU delimiter by default if it is missing from upstream.
This should be done only in case of byte-stream format.
Note that:
We have to compensate for the new bytes added for the AU delimiter, otherwise
insertion of PPS/SPS will use wrong offsets and overwrite the wrong data.
Also mark the AU delimiter blob const, and use frame->out_buffer for
storing the output to keep baseparse assumptions valid.
Original-Patch-By: Michal Lazo <michal.lazo@mdragon.org>
Helped by Sebastian Dröge <sebastian@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=736213
These are the rules:
In the SPS:
* if frame_mbs_only_flag=1 => all frames are progressive
* if frame_mbs_only_flag=0 => field_pic_flag defines whether each frame is
progressive or interlaced, thus the mode is 'mixed' in GStreamer
terms.
https://bugzilla.gnome.org/show_bug.cgi?id=779309
Doing lazy conversion of PCR values doesn't work right
when a PCR discont is encountered. Instead, convert PCR
values to the continuous timestamp domain as soon as we
encounter them and store that instead.
This element transforms a given number of input channels into a given number of
output channels according to a given transformation matrix. The matrix
coefficients must be between -1 and 1. In the auto mode, input/output channels
are automatically negotiated and the transformation matrix is a truncated or
zero-padded identity matrix.
https://bugzilla.gnome.org/show_bug.cgi?id=777376
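As a sketch only, assuming the element instance is in "mixmatrix" and that the
property names "in-channels", "out-channels" and "matrix" plus the double
coefficient type are correct (they are assumptions here), a 2x2 identity
matrix could be set from C with GST_TYPE_ARRAY rows:
  GValue matrix = G_VALUE_INIT, row = G_VALUE_INIT, v = G_VALUE_INIT;
  gint i, j;
  g_value_init (&matrix, GST_TYPE_ARRAY);
  for (i = 0; i < 2; i++) {
    g_value_init (&row, GST_TYPE_ARRAY);
    for (j = 0; j < 2; j++) {
      g_value_init (&v, G_TYPE_DOUBLE);
      g_value_set_double (&v, i == j ? 1.0 : 0.0);
      gst_value_array_append_value (&row, &v);
      g_value_unset (&v);
    }
    gst_value_array_append_value (&matrix, &row);
    g_value_unset (&row);
  }
  g_object_set (mixmatrix, "in-channels", 2, "out-channels", 2, NULL);
  g_object_set_property (G_OBJECT (mixmatrix), "matrix", &matrix);
  g_value_unset (&matrix);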
See https://bugzilla.gnome.org/show_bug.cgi?id=773666
This would ideally be solved in baseparse but that requires further
thought at this point, and in the meantime it would be good to have
rawbaseparse not assert on this but handle it gracefully instead.
DVDs always have subpictures that start on an even Y
coordinate, but gstspu does more generic vobsubs these
days, so handle ones that start on an odd vertical position.
https://bugzilla.gnome.org/show_bug.cgi?id=777400
timecodestamper will post an element message which contains the current
timecode it just stamped. If a timecode was already found and not
replaced, it will still post it in a message.
https://bugzilla.gnome.org/show_bug.cgi?id=777048
... rather than when determining when to end the frame.
The opportunity to do so might not come when forced to drain,
and it seems nicer anyway to do so at parse wrapup time.
This happens if we had no CAPS event yet but e.g. got an EOS event. We
would then try to output a 0-sized buffer, but getting that from the
adapter will give an assertion, return NULL and then crash.
If they were not ported after 4+ years it seems unlikely that anybody is
ever going to need them again. They're still in the GIT history if
needed.
https://bugzilla.gnome.org/show_bug.cgi?id=774530
Compositor does not support it currently and it needs special support
for handling this correctly, and is rather non-trivial to implement for
all formats.
For frame->buffer, baseparse is doing that automatically for us. For
frame->output_buffer it doesn't and assumes that the subclass is already
doing that. Consistency!
This is useful e.g. if audio buffers should be exactly the duration of a
video frame, or if audio buffers should never be too large because of
latency constraints.
The element takes a fractional buffer duration, to allow working
with e.g. 1001/30000 as the output duration; it accumulates rounding
errors in the buffer durations and compensates for them by making some
buffers one sample larger than the others.
https://bugzilla.gnome.org/show_bug.cgi?id=774689
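As a sketch (the element and property names, audiobuffersplit and
"output-buffer-duration", are assumptions here), matching buffers to
30000/1001 fps video frames could look like:
  GstElement *split = gst_element_factory_make ("audiobuffersplit", NULL);
  gst_util_set_object_arg (G_OBJECT (split),
      "output-buffer-duration", "1001/30000");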
We will allocate a screen area of width*height*bpp bytes, however this
calculation can easily overflow if a too-large width or height is given
inside the stream. Nonetheless we would just assume that enough memory
was allocated, try to fill it, and overwrite as much memory as wanted.
Also allocate the screen area filled with zeroes to ensure that we start
with full-black and not any random (or not so random) data.
https://scarybeastsecurity.blogspot.gr/2016/11/0day-poc-risky-design-decisions-in.html
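A sketch of the kind of guard this implies (variable names hypothetical):
reject dimensions whose product would overflow before allocating, and zero
the allocation so the screen starts out black:
  if (width == 0 || height == 0 || bpp == 0 ||
      width > G_MAXUINT / bpp || width * bpp > G_MAXUINT / height)
    return GST_FLOW_ERROR;
  screen = g_malloc0 ((gsize) width * bpp * height);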
Ideally we should just remove this plugin in favour of the one in
gst-libav, which generally seems to be of better code quality.
https://bugzilla.gnome.org/show_bug.cgi?id=774533
Type cast has higher precedence than bitwise shift, so the third
argument will be truncated to 8 bits and then shifted right by 8 bits,
resulting in a constant zero.
https://bugzilla.gnome.org/show_bug.cgi?id=774293
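A small illustration of the precedence issue:
  guint16 v = 0x1234;
  guint8 wrong = (guint8) v >> 8;    /* ((guint8) v) >> 8 == 0x34 >> 8 == 0 */
  guint8 right = (guint8) (v >> 8);  /* 0x12, the intended high byte */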
Consistently use GST_ROUND_UP_4(width) as stride for
bayer buffers. Bayer data will usually come in widths
that are multiples of 4 anyway, so hopefully this
should not have any adverse impact on anyone in
practice.
Before, bayer2rgb required input buffers to be sized
accordingly, but then didn't actually round up when
calculating row offsets. rgb2bayer didn't use a rounded
stride or buffer size.
https://bugzilla.gnome.org/show_bug.cgi?id=752014
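The stride and size this implies (a sketch; bayer data here is one byte per
pixel):
  guint stride = GST_ROUND_UP_4 (width);   /* e.g. width 1278 -> stride 1280 */
  gsize size = (gsize) stride * height;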
rawvideoparse wouldn't error out on not-negotiated,
but would just keep on going, because it didn't pass
the flow return value back to the parent class and
thus upstream, so the source wouldn't stop streaming.
MSVC warns about this because it's a C++ compiler, and this actually
results in useful things such as the incorrect 'gboolean' return value
for functions that return GstFlowReturn, so let's do explicit
conversions to reduce the noise and increase its efficacy.
With MSVC, this gives the following warning:
warning C4305: 'function': truncation from 'double' to 'gfloat'
Apparently, MSVC does not figure out what type to use for constants
based on the assignment. This warning is very spammy, so let's try to
fix it.
In file included from ../subprojects/gst-plugins-base/gst-libs/gst/video/video.h:27:0,
from ../subprojects/gst-plugins-bad/gst/segmentclip/gstvideosegmentclip.c:25:
../subprojects/gst-plugins-base/gst-libs/gst/video/video-format.h:27:39: fatal error: gst/video/video-enumtypes.h: No such file or directory
#include <gst/video/video-enumtypes.h>
^
compilation terminated.
https://ci.gstreamer.net/job/GStreamer-master-meson/269/console
In M2TS mode, we need an extra 4 bytes in the buffer, so need
to ensure the buffer can contain these. The allocation site
does not know the mode, so this is done in all cases.
This was a regression.
We only have an upstream-id via STREAM_START if we were in push-mode.
In pull-mode we need to create one.
Note: It would be good to eventually have that method (copied from
gst_pad_get_stream_id_internal()) public in the future
For each MpegTSBaseStream, we have a GstStream object which
subclasses can extend with information.
For each program a GstStreamCollection is created containing the
GstStream of each of its streams.
When dealing with TIME-based input, the incoming stream could have
potentially changed completely.
In order to check whether it did or not, we need to re-check all sections
(PAT, PMT...). If it didn't, we will keep using the existing streams/pads,
and if it did we will act as if there was a program switch.
Fixes HLS streaming with decodebin3/playbin3
The default value of the D-bit is changed to TRUE so discontinuity
is set for the initial request and seek requests as well.
Only set the e_bit flag for the CUSTOM_DOWNSTREAM event if
a cached buffer exists.
https://bugzilla.gnome.org/show_bug.cgi?id=770221
EAC3 bit streams shall be identified with a stream_type value of 0x87 when
transmitted as PES streams conforming to ATSC-published standards. It is specified
in ATSC Standard A/52.
https://bugzilla.gnome.org/show_bug.cgi?id=770528
The headers passed as parameter are relative to the build dir,
basically "../subproject/gst-plugins-bad/gst-libs/gst/mpegts/XXX.h",
but that does not match what is needed at build time when building as a
subproject. Also, we always add the current dir as include_dir, so we are
safe including directly.
And link mpegtsdemux against the 'math' library as it is needed.
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
_stdint.h is generated by Autotools and we don't really need it. All
supported platforms now ship with stdint.h. The only stickler was MSVC,
and since Visual Studio 2015 it also ships stdint.h now.
After seeking in aiff files the information about the data end offset is
discarded, leading to audio artifacts with metadata chunks at the end of
a file.
This patch retains the end offset information after a seek event.
https://bugzilla.gnome.org//show_bug.cgi?id=769376
timecodewait receives a timecode as an argument (either as string or as
GstVideoTimeCode - one is gst-launch-friendly and the other is code-friendly),
and it will drop all audio and video buffers until that timecode has been
reached.
https://bugzilla.gnome.org/show_bug.cgi?id=766419
When draining a program, we might send a newsegment event on the pads
that are going to be removed (and then the pending data).
In order to do that, calculate_and_push_newsegment() needs to know
what list of streams it should take into account (instead of blindly
using the current one).
All callers to calculate_and_push_newsegment() and push_pending_data()
can now specify the program on which to act (or NULL for the default
one).
Fixing the following warning when generating documentation:
xml/element-gaussianblur.xml:72: element refsect2: validity error :
ID GstGaussianBlur already defined
<refsect2 id="GstGaussianBlur" role="typedef">
^
Warning: multiple "IDs" for constraint linkend: GstGaussianBlur.
DOC Fixing cross-references
Fixing the following warning when generating documentation:
xml/element-chromium.xml:74: element refsect2: validity error :
ID GstChromium already defined
<refsect2 id="GstChromium" role="typedef">
^
Warning: multiple "IDs" for constraint linkend: GstChromium.
DOC Fixing cross-references
When skipping data, check if they are filler bytes. If so, drop the
data instead of skipping. We don't want to output filler bytes, but they
shouldn't cause a discontinuity.
https://bugzilla.gnome.org/show_bug.cgi?id=768125
If the input alignment claims AU alignment, each received
buffer should contain a complete video frame, so never hold over parts
of buffers for later processing. Also reduces latency, as packets
are parsed/converted and output immediately instead of 1 buffer
later.
Fixes a problem where an (arguably disallowed) padding byte on the
end of a buffer is detected as an extra byte in the following
start code, and messes up the timestamping that should apply to
that start code.
This is an automatic update with manual merges of running
"make update" in the doc/plugins directory. This should help
later maintenance of the plugins doc. A lot of plugins are
not referenced in the doc yet. That will come later.
And always set the sampling field on the src caps, if necessary guessing a
correct value for it from the colorspace field.
Also did some cleanup: removed the redundant sampling enum.
https://bugzilla.gnome.org/show_bug.cgi?id=766236
The heuristic to choose whether to packetise or not was changed to use
the segment format. The problem is that this change reads the segment
during caps event handling, while the segment event will only be sent
afterwards. That prevented the decoder from going into packetized mode
and avoiding useless parsing.
https://bugzilla.gnome.org/show_bug.cgi?id=736252
A simple fix for the problem of creating new pads with duplicate
names when switching program, easier than the alternative of
trying to work out which pads might persist and manage that.
See https://bugzilla.gnome.org/show_bug.cgi?id=758454
Remove code that dealt with odd strides separately - there's
not really any overhead to just using 1 codepath for both matched
and unmatched stride output.
Add separate codepaths for BE vs LE GRAY16 input so they're
handled properly
As is done everywhere else, and avoids setting bogus values
And remove useless *<val> checks (we always provide valid values and
it's an internal function).
CID #1320700
This helps in cases where raw audio data is being delivered, but the
buffers do not come in sample aligned sizes. The new unalignedaudioparse
bin can be autoplugged and configures an internal audioparse element to
align the data. audioparse itself gets support for audio/x-unaligned-raw
input caps; the output caps then contain the same information, except that
the name is changed to audio/x-raw (since audioparse aligns the data).
This ensures that souphttpsrc ! audioparse still works.
https://bugzilla.gnome.org/show_bug.cgi?id=689460
When scanning for SCR / PTS / DTS, handle the case where
the pack header is followed by the optional system header,
so we can correctly collect timestamps in such cases.
https://bugzilla.gnome.org/show_bug.cgi?id=623860
When the file size is smaller than the configured 4MB scan
limit for timestamps, don't underflow the guard variable
when checking if it's time to stop.
Limit the backward SCR scan to the same 4MB as the PTS scan.
Add some comments.
Adds a new function to the mpegts lib to create an iso639 language
descriptor from a language, and uses it in mpegtsmux to add
a language descriptor to audio streams that have a language set.
https://bugzilla.gnome.org/show_bug.cgi?id=763647
When the sub-class is delaying deactivation of the old program,
but it has the same program number as the new program, don't
overwrite the old program in the hash table and then steal
the new program back out of it. Instead, add the new program to
the hash table after handling removal of the old one.
This way we can use the base class for buffer allocation, hence use the
fill() virtual instead of create(). This also adds a strict check on the
selected pool buffer size as we don't support strides and padding.
This is based on an initial patch proposed by Sebastian Dröge, from which I
also fixed a buffer pool leak.
https://bugzilla.gnome.org/show_bug.cgi?id=763441
As we currently only use the server reported "natural" format, caps
negotiation should simply be limited to telling the base class which
format to use. Fix the negotiation by moving the associated code
into the negotiate() virtual function. Also, use gst_base_src_set_caps()
rather than setting it on the pad directly, and protect against this
method being called multiple times (we can't renegotiate for now).
This change also moves some network code that was being run during the
application state change call, to be run on the streaming thread.
https://bugzilla.gnome.org/show_bug.cgi?id=739598
Although it's not very well documented, g_input_stream_read_all() will
set the number of bytes read to 0 if the connection is closed rather
than returning an error.
This prevents recursion on error. This used to happen as we
don't change the state when something fails. We end up running
and failing in the same state forever.
Using GSocketClient we can simplify the read/write operations a lot.
This also provides a GSocketConnection (a GIOStream) which can then
be used with GTlsClientConnection for secure connections. Note
that we use _write_all() to ensure all bytes have been written. This is
to follow the fact that none of the _send() calls check the return
value.
When the security cannot be negotiated, the server returns
security type of 0 (failure). In that case, the next step is
to read the error reason string.
We get into this code path if the profile is already constrained-baseline and
downstream does not support constrained-baseline. So we should try baseline
and the other compatible profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=764448
Request pads are requested by applications and as such should only be released
by them again. Instead of releasing them when stopping the muxer, just clear
their state so that they can be used again when starting the muxer again.
https://bugzilla.gnome.org/show_bug.cgi?id=763862
The parser handles the downstream force-key-unit event incorrectly:
it tries to parse it as an upstream force-key-unit event, does not
check the return value, and then uses uninitialized memory in the
"all_headers" boolean variable.
https://bugzilla.gnome.org/show_bug.cgi?id=763793
When the sub-class claims a program for later freeing, make
sure it's not left in the hash table, or it can cause crashes on shutdown.
Make sure tsdemux frees any program it has kept around at shutdown
if it wasn't freed already.
https://bugzilla.gnome.org/show_bug.cgi?id=763503
This is a regression from when mpegvideoparse was switched to
use the codecparsing library.
The problem is that the high bit of the profile_and_level is used
to specify non-hierarchical profiles and levels. Unfortunately we
were discarding that information.
Expose that escape bit, and use it in the element
https://bugzilla.gnome.org/show_bug.cgi?id=763220
In some cases, the PTS might be smaller than the first observed PCR
value, which causes the element to apply wraparound, leading to bogus
timestamps. To solve this, we only apply it if the PTS-PCR difference is
greater than 1 second, to be sure that it's a real wraparound.
Moreover, using unsigned 32-bit values to handle the wrapover could end up
with bogus values, so use the pts value to handle it.
Also, convert the pcr time to gst time before comparing it to the pts,
since refpcr is expressed in the PCR time base while pts is expressed in
GStreamer time.
https://bugzilla.gnome.org/show_bug.cgi?id=743259
Enabling passthrough mode is causing multiple issues:
For NAL-aligned multi-resolution streams, passthrough mode
makes h264parse unable to advertise the new resolutions.
It also causes issues while parsing MVC streams which have two
separate layers (base-view and non-base-view).
This fix is only a temporary workaround.
For MVC, proper fixes are needed in many places:
(handle the prefix nal unit, handle the non-base-view slice nal extension,
fix the picture_start detection for multi-layer MVC streams, etc.)
https://bugzilla.gnome.org/show_bug.cgi?id=758656
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
Set fallback channel layout on files with more than two
channels. It's not clear where to retrieve the real layout from
or what the default layout is for AIFF files; the spec
only seems to specify some layouts for up to 6 channels,
and the file in question doesn't have a CHAN chunk.
https://bugzilla.gnome.org/show_bug.cgi?id=676425
This fixes a couple of issues regarding the output of (request)
per-program pads output:
We would never push out PAT sections (ok, that was one really stupid
mistake. I guess nobody ever uses this feature ...).
In the case where the PMT section of a program was bigger than one
packet, we would only end up pushing the last packet of that PMT. Which
obviously results in the resulting stream never containing the proper
(complete) PMT.
The problem was that the program is only started (in the base class)
after the PMT section is completely parsed. When dealing with single-program
pads, tsparse only wants to push the PMT corresponding to the requested
program (and not the other ones). tsparse did that check by looking
at the streams of the program...
... but that program doesn't exist for the first packets of the initial
PMT.
The fix is to use the base class program information (if it parsed the
PAT it already has some information, like the PMT PID for a given program)
if the program hasn't started yet.
In addition to the fact that it's a sane thing to do for multi-source
pad elements, it also avoids the situation where just using a request
pad (and not the main static pad) would result in the processing
stopping.
tsdemux is not able to handle negative playback rates,
but in mpegtsbase the same check is not being done.
Added a check to not handle negative rates while seeking unless
the same is handled upstream.
https://bugzilla.gnome.org/show_bug.cgi?id=758516
Since commit b77f8e172a the new value
assigned to mview_mode hasn't been used. That commit changed the following
"if" check to an "else if", which means the original value of mview_mode
is used.
When converting from avc to byte-stream, there will not be any codec_data
in the src caps. Remove it before the equality check to avoid sending caps
events downstream on every SPS/PPS change.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
If we have a stream that contains an unchanging SPS/PPS for every video frame,
we don't need to constantly query downstream for its supported caps if the
current caps are compatible with the negotiated caps.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
When the framesize is not specified, we try to calculate a size from
the strides and offset information. This was done with the sum of
offsets plus the size of the last frame, which is simply the wrong
method. We also need to account for video meta that may be flipping two
planes. An example is converting I420 to YV12 by flipping the two last
offsets.
https://bugzilla.gnome.org/show_bug.cgi?id=760270
To make the parser work with images having non-standard strides, plane
offsets, or padding between images.
For now, since the element doesn't check for video meta, we can't directly
push buffers when these properties are set, so it converts the frame
in the pre_push_buffer method to remove any custom padding.
https://bugzilla.gnome.org/show_bug.cgi?id=760270
Allows the subclass to completely override the chosen src caps.
This is needed as videoaggregator generally has no idea exactly
what operation is being performed.
- Adds a fixate_caps vfunc for fixation
- Merges gst_video_aggregator_update_converters() into
gst_videoaggregator_update_src_caps() as we need some of its info
for proper caps handling.
- Pass the downstream caps to the update_caps vfunc
https://bugzilla.gnome.org/show_bug.cgi?id=756207
When sps data is NULL, the buffer allocated and mapped is not being freed.
In this scenario there is no need to allocate the buffer as we are supposed to return NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=761070
It's useful enough already to be used in other elements for audio aggregation,
let's give people the opportunity to use it and give it some API testing.
https://bugzilla.gnome.org/show_bug.cgi?id=760733
It's not enough to have timeout or event based VPS/SPS/PPS information
sent in RTP packets. There are some scenarios when key frames may appear
more frequently than once a second, in which case the minimum timeout
for "config-interval" of 1 second for sending VPS/SPS/PPS isn't enough.
It might also be desirable in general to make sure the VPS/SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
VPS/SPS/PPS is not signaled out of band.
This commit adds the possibility to send VPS/SPS/PPS with every key frame.
This mode can be enabled by setting "config-interval" property to -1. In
this case the payloader will add VPS, SPS and PPS before every key (IDR)
frame.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
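A sketch of enabling the new mode (presumably on rtph265pay, the H.265
payloader this change applies to):
  GstElement *pay = gst_element_factory_make ("rtph265pay", NULL);
  g_object_set (pay, "config-interval", -1, NULL);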
The MPEG standard (ISO/IEC 13818-1) says the reserved bits need to be set
to one (2.1.64). This is causing transport streams to fail validation
on some systems.
https://bugzilla.gnome.org/show_bug.cgi?id=760127
This works usually in this place, unless the compiler optimizes things in
interesting ways in which case it causes stack corruption and crashes later.
The compiler in question here is clang with -O1, which seems to pack the stack
a bit more and causes writing to the guint as pointer to overwrite map.memory,
which then later crashes during unmapping of the memory.
While encoding the frame in ASCII mode, four bytes are needed per component,
and after every 20 bytes a \n will be added. So the calculation should be
size = size * (4 + 1 / 20). This should exclude the header being written.
Since the header is also being included in the calculation, memory
mishandling happens.
https://bugzilla.gnome.org/show_bug.cgi?id=759520
rename gst-launch --> gst-launch-1.0
replace old elements with new elements (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
https://bugzilla.gnome.org/show_bug.cgi?id=759432
The edit rate is only supposed to be the same in a source package, but there
might be multiple source packages with the same essence container. As such
just comparing the body/index SID is not sufficient.
This was completely broken before and could only work on a very constrained
set of files. After these changes it should work except for situations where
PTS != DTS, which is not handled at all in mxfdemux currently.
https://bugzilla.gnome.org/show_bug.cgi?id=759118
According to S377-1-2009c 9.2 the local tags must not be resolved from the
primer pack, which as a result means that there can't be any other tags than
statically assigned ones.
This is to support byte-stream decoders that do not remember the
PPS/SPS after a flush. This is not needed by all decoders, but is
harmless for those that do remember.
https://bugzilla.gnome.org/show_bug.cgi?id=758405
The order in which program switch must happen is:
1) drain all data on old pads (but don't push EOS)
2) add new pads (but don't push any data on them)
3) Push EOS and remove old pads
4) Start pushing data on new pads
There was one caveat in this implementation, which is that when
we activate a sparse pad (step 2) we would push a GAP event. The problem
is that, while being an event, it is actually *data*.
We therefore need to make sure pushing those GAP events is done at the step
where we start pushing data.
https://bugzilla.gnome.org/show_bug.cgi?id=750402
Before we add any streams, make sure we drain all streams. This ensures
consistency, so that only "new" data will be pushed once
the new pads are added.
https://bugzilla.gnome.org/show_bug.cgi?id=750402
When changing programs, the order of events needs to be the following:
* add pads from new program
* send EOS on old pads
* remove old pads
* emit 'no-more-pads'
Previously tsdemux was not doing that, and was first deactivating and
removing old pads before adding new ones.
We fix this by allowing subclasses of mpegtsbase to handle the
deactivation of programs themselves. In this case tsdemux will
properly deactivate the old program once it has activated the new one.
https://bugzilla.gnome.org/show_bug.cgi?id=750402
The videoframe-audiolevel element acts like a synchronized audio/video "level"
element. For each video frame, it posts a level-style message containing the
RMS value of the corresponding audio frames. This element needs both video and
audio to pass through it. Furthermore, it needs a queue after its video
source.
https://bugzilla.gnome.org/show_bug.cgi?id=748259
0x04 signifies an MPEG elementary stream, but according to RP2008, 0x10 should
be used for an h264 byte-stream. This also fixes compatibility of our files
with ffmpeg.
If packet->payload_unit_start_indicator is true and the pointer is 0, there is no
discontinuity check. Therefore there could be an incomplete previous section
that needs to be cleared.
https://bugzilla.gnome.org/show_bug.cgi?id=758010
The values of channel_mapping are copied by gst_codec_utils_opus_create_caps ()
but it doesn't free or take ownership of the g_new0 allocated memory. This
needs to be freed before going out of scope.
CID 1338692
buf surely isn't NULL inside the block conditional on a buffer size bigger
than (G_MAXUINT16 - 3). Plus gst_buffer_unref() checks if the buffer is
NULL and does nothing if it is.
CID 1338693
If tsdemux never receives data for a stream, the corresponding pad will never
be added and stream->active will remain FALSE. When the stream is removed, the
pad will not be unreffed and will be leaked.
https://bugzilla.gnome.org/show_bug.cgi?id=757873
The current implementation for detecting resolution changes
on key frames is based on vp8 bitstream alignment. Avoid this
width and height parsing for the vp9 bitstream, which requires proper
frame header parsing in order to detect the resolution change (FIXME).
https://bugzilla.gnome.org/show_bug.cgi?id=757825
It is up to the element handling the seek to send flush events
downstream, otherwise we end up with a situation where upstream
would get unexpected GST_FLOW_FLUSHING
The ONVIF Streaming Specification specifies that the NTP timestamps
in the ONVIF extension header indicate the absolute UTC time associated
with the access unit. But by using running time we cannot achieve that,
since a frame's running time depends on the played interval, whether a
non-flushing seek is done, etc. Instead we have to use the stream time.
https://bugzilla.gnome.org/show_bug.cgi?id=757688
It is now possible to update the currently used ntp-offset with a
custom serialized downstream event. The element will read the ntp-offset
property when doing the state transition from READY to PAUSED and
use that offset until it receives a "GstNtpOffset" event, which also
has a "ntp-offset" attribute in that it's structure. In case the
property is not set and no event has been received, the element will
guess the npt-offset with help of the clock. If no clock can be
retrieved, the element will error out and stop the data flow.
The same event is also used for updating the D/E-bits in the RTP
extension header. The discont flag in a buffer can be set whenever a
live/network source loses a frame, but that is not the type of
discontinuity that the onvif extension header should reflect. The
header is mainly used for playback of a track concept, in which
gaps can be present, and it's those kind of gaps that should be
highlighted with the D- and E-bits.
https://bugzilla.gnome.org/show_bug.cgi?id=757688
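A minimal sketch of sending such an update from an application (the
ntp-offset field type is assumed to be a guint64 nanosecond value, and the
new_offset/pipeline variables are placeholders; only the structure and
field names mentioned above are used):
  GstStructure *s = gst_structure_new ("GstNtpOffset",
      "ntp-offset", G_TYPE_UINT64, (guint64) new_offset, NULL);
  GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM, s);
  gst_element_send_event (pipeline, event);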
If a buffer or a buffer list is cached, no events serialized with the
data stream should get through. The cached buffers and events should
be purged when we stop flushing.
https://bugzilla.gnome.org/show_bug.cgi?id=757688
Otherwise those symbols can conflict with external libraries when
linking everything statically for mobile targets.
Use the gst_gm_ prefix, short for gst geometric math.
https://bugzilla.gnome.org/show_bug.cgi?id=756882
Store and copy input state fields when setting the
output state of the decoder. Avoids problems like
the framerate set by an upstream element being ignored
Related to:
https://bugzilla.gnome.org/show_bug.cgi?id=756563
New subclass with a similar behaviour as the old liveadder, but
a slightly different API as the latency is in nanoseconds, not
milliseconds. Also, the new liveadder has an effective latency that
is latency + output-buffer-duration. In practice, just setting a non-zero
latency with the new audiomixer gives you the right behavior in 99% of the
cases.
Build error due to wrong argument type in a debug message:
aagg->priv->offset and next_offset are of type int64, but a uint64
format specifier is being used in the logs. Change all of those to int64.
https://bugzilla.gnome.org/show_bug.cgi?id=756065
When g_option_context_parse fails, the context and error variables are not
freed, which results in memory leaks. Free them.
Also replace g_error_free with g_clear_error, which checks that the error
being passed is not NULL and sets the variable to NULL after freeing.
https://bugzilla.gnome.org/show_bug.cgi?id=753854
The buffer timestamps in the collect function will already be
running time, don't try to convert them again to running time,
this would yield CLOCK_TIME_NONE now that the segment is shifted
to account for negative dts.
This fixes x264enc ! mpegtsmux ! hlssink, which was broken
because mpegtsmux would send a downstream key unit event with
running time NONE and then hlssink would immediately send
another one upstream and it would just be a flood of force
keyframe events in both directions after the first one. This
would then break hlssink because it uses multifilesink in
next-file=key-unit-event mode, and starting a new file after
every few kB does not work well for HLS.
The alsamidisrc element allows getting input events from ALSA MIDI
sequencer devices, and possibly converting them to sound using some
downstream element like fluiddec.
https://bugzilla.gnome.org/show_bug.cgi?id=738687
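A sketch of the intended use (this exact pipeline is an assumption, not taken
from the change itself):
  GstElement *pipeline = gst_parse_launch (
      "alsamidisrc ! fluiddec ! audioconvert ! autoaudiosink", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);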
An IDX file or codec_data normally contains the original frame size of
the video. Allow upstream to provide this information by sending a
custom event, which will allow scaling the overlay correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=663750
Instead of only supporting writing SPU data directly to YUV frames,
render the SPU data to an intermediate AYUV overlay buffer. The overlay
data is then attached to the video frame if downstream supports overlay
composition, otherwise the AYUV overlay is blended to the video frame.
For the PGS format, the overlay buffer size is set to the size of the
Composition Window, and its position in the overlay composition is set
to the window position. The objects to render are now cropped when the
cropping flag is set.
For the Vobsub format, the overlay buffer size is set to the size of the
Display Area.
Once rendered, the overlay composition rectangle is now moved and scaled
to fit the video output size, to avoid clipping.
https://bugzilla.gnome.org/show_bug.cgi?id=663750
We might've queued up a GAP buffer that is only partially inside the current
output buffer (i.e. we received it too late!). In that case we should only
skip the part of the GAP buffer that is inside the current output buffer, not
also the remaining part. Otherwise we forward this pad too far into the future
and break synchronization.
Derive from GstVideoSink so that preroll frames will automatically
get rendered too, unless the show-preroll-frame property is set to
FALSE. Fixes intervideosrc only picking up frames if intervideosink
is in PLAYING state.
https://bugzilla.gnome.org/show_bug.cgi?id=755049
Fix the negotiation of GstVideoOverlayComposition by checking
intersection with the peer caps, rather than just accept-caps,
which might only check the pad template.
https://bugzilla.gnome.org/show_bug.cgi?id=755113
We have to queue buffers based on their running time, not based on
the segment position.
Also return running time from GstAggregator::get_next_time() instead of
a segment position, as required by the API.
Also only update the segment position after we pushed a buffer, otherwise
we're going to push down a segment event with the next position already.
https://bugzilla.gnome.org/show_bug.cgi?id=753196
x/y/w/h are signed integers. As can be seen in GstCompositorPad.
The prototype for clamp_rectangle was wrong. This commit reverts the change
and fixes the prototype.
This reverts commit bca444ea4a.
CLAMP checks both if value is '< 0' and '> max'. Value will never be a negative
number since it is an unsigned integer. Removing that check and only checking if
it is bigger than max by using MIN().
CID 1320707
As it's recursive, gst_pad_get_allowed_caps() may also return
empty for anything incompatible downstream. EMPTY is not a valid caps
value for gst_caps_fixate(). This led to an assertion and then a crash.
Ideally, the negotiate function should be re-factored to have a return
value, so we could make the negotiation fail earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=754122
The obscured check in compositor was using the dimensions of the pad to clamp
the h/w of the pad instead of the output resolution, and was doing an incorrect
calculation to do so. Fix that by simplifying the whole calculation by using
corner coordinates. Also add a test for this bug which fell through the cracks,
and just skip all the obscured tests if the pad's alpha is 0.0.
https://bugzilla.gnome.org/show_bug.cgi?id=754107
The tsdemux latency should always be added to the minimum
latency (which is always a valid clock time value). The
"cleanup" in commit a1f709c2 made it so that it would not
be added if upstream reported 0 as minimum latency (as
e.g. udpsrc would). This broke playback of live mpeg-ts
streaming in some cases, leading to playback stutter due
to a too-small configured latency for the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=751508
In gst_live_adder_chain() function, calls to gst_buffer_copy_region() can lead
to assertion as 'offset + size <= bufsize' is not respected.
Indeed 'offset' and 'size' parameters are calculated through calling gst_live_adder_length_from_duration(),
and thus gst_util_uint64_scale_int_round().
Depending on the nearest integers, rounded values 'offset' and 'size' can then trigger the assertion.
This case mainly occurs when 'skip' value is > 0 in chain function process.
https://bugzilla.gnome.org/show_bug.cgi?id=753759
It is faster than doing a query that propagates downstream and
should be enough.
Elements: faac, gsmenc, opusenc, sbcenc, voamrwbenc, adpcmenc, sirenenc
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
A race condition in the state change function may cause buffers to be
unreffed while they are still used by the streaming thread in
gst_rtp_h265_pay_send_vps_sps_pps() resulting in a crash. Chain up to the
parent class first in the state change function to make sure streaming
has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills out
dynamically allocated data and requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=753306
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
When playing mts files with embedded subtitles, the buffer is mapped,
but not unmapped at the end resulting in a memory leak.
Also unref event in handle_dvd_event as it takes ownership of the event.
https://bugzilla.gnome.org/show_bug.cgi?id=753539
When playing mts files with embedded subtitles, there are a few event leaks.
Events are supposed to be transfer full. So if not forwarding the event,
they need to be freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753539
h264parse and gstrtph264depay do the same, let's keep the behaviour
consistent. As we now include the codec_data inside the stream, this causes
less caps renegotiation.
https://bugzilla.gnome.org/show_bug.cgi?id=753228
rtph264depay does the same and this fixes decoding of some streams with 32
SPS (or 256 PPS). It is allowed to have SPS ID 0 to 31 (or PPS ID 0 to 255),
but the field in the codec_data for the number of SPS or PPS is only 5
(or 8) bits. As such, 32 SPS (or 256 PPS) are interpreted as 0 everywhere.
This looks like a mistake in the part of the spec about the codec_data.
This causes an assertion and would lead to getting a NULL instead
of a buffer. Without proper checking this would easily lead to a
segfault.
Related to rtph264depay: https://bugzilla.gnome.org/show_bug.cgi?id=737199
String parameters are expected to be passed as (f0r_param_string *),
which actually maps to char**. In the filters this is evaluated as
(*(char**)param), which currently leads to a crash when passing char*.
Remove the special case for strings; all types, including char*, are
passed as a reference.
https://phabricator.freedesktop.org/T83