Making the event itself writable is not enough; it won't make
the actual taglist in the event writable as well. Instead, just
make a copy of the taglist and then create a new tag event from
that if required, replacing the old one. Previously we would
inadvertently modify taglists that upstream elements might still
be holding on to. Also add a unit test for this.
https://bugzilla.gnome.org/show_bug.cgi?id=762793
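For illustration, the pattern boils down to roughly the following sketch
(the GST_TAG_TITLE tag and the variable names are illustrative, not the
actual code):

  GstTagList *tags, *copy;

  gst_event_parse_tag (event, &tags);
  /* the taglist inside the event is not writable even if the event is,
   * so work on a copy and build a fresh event from it */
  copy = gst_tag_list_copy (tags);
  gst_tag_list_add (copy, GST_TAG_MERGE_REPLACE, GST_TAG_TITLE, "foo", NULL);
  gst_event_unref (event);
  event = gst_event_new_tag (copy);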
In some cases the stream configuration can fail, for instance if the
stream is protected and no decryptor was found. In those situations
the demuxer shouldn't push any data on the corresponding source pad of
the stream and should bail out instead.
https://bugzilla.gnome.org/show_bug.cgi?id=762516
When the cenc metadata is stored outside of the moof box and the
stream is exposed it is possible that the cenc metadata hasn't been
processed yet while the first buffer is being pushed. When this
happens the buffer can't possibly be decrypted downstream so don't
push it.
https://bugzilla.gnome.org/show_bug.cgi?id=762516
Remove calls to gst_pad_has_current_caps() that are immediately followed by
gst_pad_get_current_caps(), as the caps can become NULL in between. Instead just
use gst_pad_get_current_caps() and check the result for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
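A sketch of the replacement pattern (the pad and the return value are
illustrative, not the actual code):

  GstCaps *caps;

  caps = gst_pad_get_current_caps (pad);
  if (caps == NULL) {
    /* no caps set, or they went away in the meantime; handle gracefully */
    return FALSE;
  }
  /* ... use caps ... */
  gst_caps_unref (caps);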
Remove calls to gst_pad_has_current_caps() that are immediately followed by
gst_pad_get_current_caps(), as the caps can become NULL in between. Instead just
use gst_pad_get_current_caps() and check the result for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
If an fwrite() call fails, the file stays open, so the next
incoming buffer will trigger an assert when trying to open a new
file.
This happens if we do not restart the element (the file is closed at stop)
and the application handles the returned GST_FLOW_ERROR to keep the bin alive.
https://bugzilla.gnome.org/show_bug.cgi?id=762434
VP9_PAY_PICTURE_ID_7BITS and VP9_PAY_PICTURE_ID_15BITS are mutually
exclusive options of the picture-id-mode. We can break after the
first case.
1 or 2 bytes need to be added to the header length depending on the
PictureID size.
https://tools.ietf.org/html/draft-uberti-payload-vp9-00#section-4.2
CID 1353479
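A sketch of the resulting switch (struct and variable names are illustrative;
only the enum values come from the actual code):

  switch (self->picture_id_mode) {
    case VP9_PAY_PICTURE_ID_7BITS:
      hdrlen += 1;
      break;                    /* the two modes are mutually exclusive */
    case VP9_PAY_PICTURE_ID_15BITS:
      hdrlen += 2;
      break;
    default:
      break;
  }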
Commit 7873bede31
(https://bugzilla.gnome.org/show_bug.cgi?id=760774) changed the
behaviour of qtdemux to call gst_qtdemux_configure_stream() for
every new moof.
When playing a protected stream, gst_qtdemux_configure_stream()
calls gst_qtdemux_configure_protected_caps(). The
gst_qtdemux_configure_protected_caps() function takes the original
media format, puts this in a field called "original-media-type"
and then changes the caps to "application/x-cenc".
The gst_qtdemux_configure_protected_caps() function did not handle the
case of being called multiple times, causing it to incorrectly set the
caps. The second call was causing the caps to be set to:
application/x-cenc, original-media-type="application/x-cenc"
This commit makes gst_qtdemux_configure_protected_caps() check that
the caps have already been transformed, so that it only gets
changed once.
https://bugzilla.gnome.org/show_bug.cgi?id=761769
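Roughly what the idempotency check amounts to, as a sketch (assumes writable
caps; not the actual code):

  GstStructure *s = gst_caps_get_structure (caps, 0);

  if (gst_structure_has_name (s, "application/x-cenc")) {
    /* already transformed for a previous moof, don't wrap it again */
    return TRUE;
  }

  gst_structure_set (s, "original-media-type", G_TYPE_STRING,
      gst_structure_get_name (s), NULL);
  gst_structure_set_name (s, "application/x-cenc");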
The payloader didn't copy anything so far; the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without tags or
with only the audio tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
Both Firefox and Chrome use OPUS as the encoding name in their SDP.
Adding this now de facto standard name removes the need for a special
case in the SDP parsing code.
https://bugzilla.gnome.org/show_bug.cgi?id=737810
This will help elements that cannot deal with multistream,
such as the RTP payloader.
The caps now do not include a "streams" field anymore, but
a "multistream" boolean, since we have no real use for knowing
the exact number of streams.
https://bugzilla.gnome.org/show_bug.cgi?id=665078
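Assuming Opus caps (the "audio/x-opus" media type is an assumption here), the
change amounts to something like this sketch:

  /* before: audio/x-opus, streams=(int)N, ...
   * after:  audio/x-opus, multistream=(boolean)false, ... */
  caps = gst_caps_new_simple ("audio/x-opus",
      "multistream", G_TYPE_BOOLEAN, FALSE, NULL);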
Send GAP events for non-subtitle streams too if they lag too much
behind, but use a higher threshold than for subtitles.
This helps with fixing prerolling with a file where one of the
audio streams only has data starting from 19s onwards. It's not
a complete fix yet, it also requires changes elsewhere, such as
in baseparse, to make sure caps are propagated.
https://bugzilla.gnome.org/show_bug.cgi?id=614460
https://bugzilla.gnome.org/show_bug.cgi?id=753899
Quick and dirty implementation of an RTP payloader and depayloader
for VP9. In particular it assumes no spatial or temporal layering,
non-flexible mode, and some other bits and pieces.
https://bugzilla.gnome.org/show_bug.cgi?id=754773
Looks like it just uses the NAL enums and nothing else from
the codecparsers, and that's the only reason it had to be
moved from -good to -bad when it was originally added. We
can probably keep those NAL enums up to date enough, so let's
remove the codecparser dependency so it can be moved back into
-good.
https://bugzilla.gnome.org/show_bug.cgi?id=761606
It's not enough to have timeout or event based VPS/SPS/PPS information
sent in RTP packets. There are some scenarios when key frames may appear
more frequently than once a second, in which case the minimum timeout
for "config-interval" of 1 second for sending VPS/SPS/PPS isn't enough.
It might also be desirable in general to make sure the VPS/SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
VPS/SPS/PPS is not signaled out of band.
This commit adds the possibility to send VPS/SPS/PPS with every key frame.
This mode can be enabled by setting "config-interval" property to -1. In
this case the payloader will add VPS, SPS and PPS before every key (IDR)
frame.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
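A minimal usage sketch (the element variable name is illustrative):

  /* send VPS/SPS/PPS with every key frame instead of on a timer */
  g_object_set (rtph265pay, "config-interval", -1, NULL);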
The payloader didn't copy anything so far; the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
A race condition in the state change function may cause buffers to be
unreffed while they are still used by the streaming thread in
gst_rtp_h265_pay_send_vps_sps_pps() resulting in a crash. Chain up to the
parent class first in the state change function to make sure streaming
has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
The payloader didn't copy anything so far; the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
h264parse and gstrtph264depay do the same, let's keep the behaviour
consistent. As we now include the codec_data inside the stream, this causes
less caps renegotiation.
https://bugzilla.gnome.org/show_bug.cgi?id=753228
rtph264depay does the same and this fixes decoding of some streams with 32
SPS (or 256 PPS). It is allowed to have SPS ID 0 to 31 (or PPS ID 0 to 255),
but the field in the codec_data for the number of SPS or PPS is only 5
(or 8) bit. As such, 32 SPS (or 256 PPS) are interpreted as 0 everywhere.
This looks like a mistake in the part of the spec about the codec_data.
This causes an assertion and would lead to getting a NULL instead
of a buffer. Without proper checking this would easily lead to a
segfault.
Related to rtph264depay: https://bugzilla.gnome.org/show_bug.cgi?id=737199
type is truncated to 0-31 with "& 0x1f", but right after that it checks whether
the value is equal to GST_H265_NAL_VPS, GST_H265_NAL_SPS, and
GST_H265_NAL_PPS (which are 32, 33, and 34 respectively). Obviously, this can
never be true if the value is at most 31 after the truncation.
The intention of the code was to truncate to 0-63.
After further investigation, the previous commit is wrong. The code intended to
check whether the type is 39 or in the ranges 41-44 and 48-55, just like
gsth265parse.c does. Type 40 would not be complete.
nal_type is the index for a GstH265NalUnitType enum. There are two types of dead
code here:
First, after checking whether nal_type >= 39 there are two OR'ed conditions that
check whether the value is in ranges higher than that number. If nal_type >= 39
is true those other conditions aren't checked, and if it is false and they are
checked, they will always be false as well. They are redundant.
Second, the enum has a range of 0 to 40, so the checks for ranges starting at 41
can never be true.
Remove these redundant checks.
CID 1249684
The running average was wrong and the resulting scaling factor was only held in
place by the CLAMP. In addition, we now converge quickly to volume
changes.
Finally, with this change we can change the resolution defines and
everything adjusts.
Commit bd27a1f30b added a few error handling
memory management checks. These check srccaps to see if it needs to be
unreferenced before returning. In the case of invalid_caps, this goto jump
always happens before srccaps is set, so it will always be NULL in this
error label.
CID #1352035
For APP/JPG markers the size follows the marker and we have to skip it. This is
not really a problem unless the marker contains e.g. a preview JPEG or
something else that we might interpret as another marker.
qtdemux calculates the framerate using the duration and the number of samples.
In the case of the fragmented mp4 format, however, the number of samples can
only be figured out after parsing every moof box. Because qtdemux does not
parse every moof in the QTDEMUX_STATE_HEADER state, this causes an incorrect
framerate calculation.
This patch triggers gst_qtdemux_configure_stream() for every new moof.
The framerate is then calculated using the duration and n_samples of the moof.
https://bugzilla.gnome.org/show_bug.cgi?id=760774
According to ISO/IEC 14496-12, an edit list box can have a
segment duration of zero. This does not imply that media_start equals
media_stop; it just indicates which sample should be presented
first. This patch derives the segment duration using media_time
and the duration of the file, and sets the derived value as segment-duration.
https://bugzilla.gnome.org/show_bug.cgi?id=760781
In push mode, qtdemux exposes streams after it has received the moov box.
We can't guarantee that a moov box has sample data, such as the sample duration
and the number of samples in the stbl box, in the fragmented format case.
So, if a moov has no sample data, streams will not be exposed until the first
moof is received.
https://bugzilla.gnome.org/show_bug.cgi?id=760779
If the following conditions are met:
1) upstream and downstream caps are compatible
2) upstream is interlaced
3) downstream doesn't support progressive mode
then deinterlace will just do passthrough instead of failing to link.
This is done with the following scenario in mind:
videotestsrc ! "video/x-raw,interlace-mode=interleaved" ! deinterlace
name=dein_src ! tee name=t ! queue ! deinterlace name=dein_file ! filesink t. !
queue ! deinterlace name=dein_desktop ! autovideosink
In this case, dein_src will do the deinterlacing. However,
videotestsrc ! "video/x-raw,interlace-mode=interleaved" ! deinterlace
name=dein_src ! tee name=t ! queue ! deinterlace name=dein_file ! filesink t. !
queue ! deinterlace name=dein_desktop ! autovideosink t. ! queue !
"video/x-raw,interlace-mode=interleaved" ! fakesink
In this case, caps auto-negotiation will make dein_file and dein_desktop do
the deinterlacing, while dein_src will be passthrough.
https://bugzilla.gnome.org/show_bug.cgi?id=760995
In this mode we will passthrough all progressive caps, but interlaced caps must
be caps that we can actually deinterlace.
This is the only difference between auto and auto-strict, auto would
passthrough all unsupported interlaced caps.
https://bugzilla.gnome.org/show_bug.cgi?id=720388
Previously the result of the CAPS query and ACCEPT_CAPS depended on what kind
of caps were last set, and e.g. if we last had interlaced caps or not. That's
just broken.
Also, previously the handling of non-sysmem caps features was rather random and
unusable.
Now the behaviour is the following, depending on the mode property:
1) mode=disabled
Completely do passthrough of everything
2) mode=interlaced
Only accept formats we can actually deinterlace, and accept interlaced
and progressive content and always run the deinterlacer and output
progressive content
3) mode=auto (i.e. playbin)
Accept all progressive formats as passthrough, accept all formats that we
can deinterlace ourselves (which we do then), but also accept everything
else for which we then just passthrough. In auto mode, deinterlacing is best
effort: If we can, we deinterlace, if we can't we just output interlaced
content.
https://bugzilla.gnome.org/show_bug.cgi?id=720388
https://bugzilla.gnome.org/show_bug.cgi?id=760553
In file included from gstrtpL16depay.h:27:0,
from gstrtp.c:73:
gstrtpchannels.h:154:33: error: 'channel_orders' defined but not used [-Werror=unused-const-variable]
static const GstRTPChannelOrder channel_orders[] =
Especially in push mode we would completely ignore the size of the data chunk
when no stop position is given for the seek. Instead make sure that the end
offset is at most the end of the data chunk if known.
Without this we would output anything after the data chunk, possibly causing
loud noises if the media file is followed by an INFO chunk or an ID3 tag.
We use that to signal "infinity"; taking the difference between that and some
other value is not going to give us any useful result for the end offsets of
segments.
Even if we have more data queued up when flushing than the size of the data
chunk, don't process and output it. If the data size is known, this likely
contains another chunk (e.g. an INFO chunk) or things like ID3 tags. Just
outputting them as if they were data is going to cause unexpected behaviour
and unpleasant audio noises.
The current example does not work; it fails with:
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstWavParse:wavparse0: Internal data flow error.
gstwavparse.c(2178): gst_wavparse_loop (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstWavParse:wavparse0:
streaming task paused, reason not-negotiated (-4)
This is because negotiation with wavenc gets messed up by the missing
channel positions configuration.
The proper way to define the channel layout when using the interleave
element in code would be to set the channel-positions property, but
gst-launch-1.0 does not know how to deal with arrays; so the example
pipeline works around the issue by setting the channel-masks in the sink
pads.
Also fix a repetition in the deinterleave example description
https://bugzilla.gnome.org/show_bug.cgi?id=735673
SBC frame length calculation wasn't being rounded up to the nearest byte
(as specified in the A2DP 1.0 specification, section 12.9). This could
cause 'stereo' and 'joint stereo' mode SBC streams to have incorrectly
calculated frame lengths.
Incorrect frame length calculation causes frame coalescing to fail, as
subsequent frames in the stream aren't found in the expected locations.
https://bugzilla.gnome.org/show_bug.cgi?id=742446
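A sketch of the corrected calculation, following the formulas in A2DP 1.0
section 12.9 (the MODE_* names and the integer ceiling via "(x + 7) / 8" are
illustrative, not the actual code):

  guint len = 4 + (4 * subbands * channels) / 8;

  if (mode == MODE_MONO || mode == MODE_DUAL_CHANNEL)
    len += (blocks * channels * bitpool + 7) / 8;   /* round up to a byte */
  else if (mode == MODE_STEREO)
    len += (blocks * bitpool + 7) / 8;
  else                                              /* joint stereo */
    len += (subbands + blocks * bitpool + 7) / 8;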
The downstream caps query with a filter already gives us the possible
intersection, so there is no need to check again with downstream whether
it is supported. Just try to set it directly.
For anyone who has read the spec it is clear that the only *invalid*
data block type is 127. For the rest, it's useful information.
Additionally, values 7-126 are currently reserved by the
spec, so the situation might change in the future.
We are only interested in the first bit of the first
byte of the metadata block header, to figure out whether
it is marked as the last one. The shift makes this much
clearer.
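Conceptually the header byte is read like this (variable names are
illustrative):

  guint8 header = data[0];
  gboolean is_last = header >> 7;   /* top bit: last-metadata-block flag */
  guint type = header & 0x7f;       /* remaining 7 bits: block type */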
If we get anything from 7 to 126 as type when parsing
a metadata block header, we are likely dealing with a
FLAC stream version we don't fully understand. Issue
a warning if so.
Document the function's assumptions regarding the passed
type while at it.
As CRCs are already calculated for the comparison, we
might as well (cheaply) inform the user how the numbers
differ if a mismatched pair is found.
While at it, rephrase the candidate-frame message to make more sense.
Prevents downstream from receiving flushes for a seek that is only
handled upstream. Those seeks only serve to start reading from the right
offset when skipping or returning to qt atoms.
https://bugzilla.gnome.org/show_bug.cgi?id=758928
Avoid writing a negative number as a large positive
integer in an edit list when the first_ts is smaller
than the first_dts - which can happen when the first
packet received has a PTS but no DTS.
https://bugzilla.gnome.org/show_bug.cgi?id=759615
Don't increment running time from every buffer. The correct
logic to only increment when running time advances is a
little further down, so delete this left-over line.
Leverage response from gst_rtsp_connection_connect_with_response to
determine if the connection should be retried using authentication. If
so, add the appropriate authentication headers based upon the response
and retry the connection.
https://bugzilla.gnome.org/show_bug.cgi?id=749596
The string could exist but with a wrong format; in that case we still want
to reset the values of client_port_range.min and max, like we do if there is
no string.
CID 1139593
We would queue 5 consecutive packets before considering a reset and a proper
discont here. Instead of expecting the next output packet to have the current
seqnum (i.e. the fifth), expect it to have the first seqnum. Otherwise we're
going to drop all queued up packets.
When working in push-mode, we attempt to push out everything currently
buffered in the adapter.
This has two pitfalls:
* We could stop earlier (the moment we get a non-ok or non-not-linked)
* We return the last combined flow return, which might be completely
different from the previous combined flow return
There might be multiple LOAS configs in a row in a full frame. The first
one might be a multi-layer config (which we can't properly parse yet)...
but it can then be followed by a valid (single-layer) one.
The code was previously skipping whole frames (instead of just the LOAS
config we failed to read), resulting in multiple frames (up to 6s seen in
some situations) being dropped before finally getting the configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=758826
auds.blockalign is set once the first caps arrive. If
gst_avi_mux_stop_file() is called before this happens then auds.blockalign
is zero and gst_avi_mux_audsink_set_fields() causes a crash:
[...]
avipad->parent.hdr.rate = avipad->auds.av_bps / avipad->auds.blockalign;
[...]
https://bugzilla.gnome.org/show_bug.cgi?id=758912
It's not enough to have timeout or event based SPS/PPS information sent
in RTP packets. There are some scenarios when key frames may appear
more frequently than once a second, in which case the minimum timeout
for "config-interval" of 1 second for sending SPS/PPS is not sufficient.
It might also be desirable in general to make sure the SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
SPS/PPS is not signaled out of band.
This patch adds the possibility to send SPS/PPS with every key frame. This
mode can be enabled by setting "config-interval" property to -1. In this
case the payloader will add SPS and PPS before every key (IDR) frame.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
This way we can use -1 as special value, which is nicer than MAXUINT.
This is backwards compatible even with the GValue API, as shown by
a unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
generate_rtcp can produce empty packets when reduced size RTCP is turned on.
Skip them since it doesn't make sense to push them and they cause errors with
elements that expect RTCP packets to contain data (like srtpenc).
When seeking back to restore the mdat position a flush is pushed
through and it resets downstream segment information. Make sure
that after the flush (that does a soft reset) a segment will
be pushed again
Fixes regressions spotted at
https://ci.gstreamer.net/job/GStreamer-master-validate/2100/
10 FourCCs generated with GST_MAKE_FOURCC() in gstqtmux.c and atoms.c
already exist in fourcc.h. Don't duplicate these and use them directly.
Plus moving 6 to fourcc.h, to centralize them all.
This fixes seeking if the first entries in the samples table are negative. The
binary search would always fail on this as the array would not be sorted if
interpreting the negative numbers as huge positive numbers. This caused us to
always output buffers from the beginning after a seek instead of close to the
seek position.
Also add a case to the comparison function for equality.
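The equality-aware comparison boils down to something like this sketch (the
function name and the exact element type are illustrative):

  static gint
  compare_ts (gconstpointer a, gconstpointer b, gpointer user_data)
  {
    gint64 ta = *(const gint64 *) a;   /* signed: negative entries sort first */
    gint64 tb = *(const gint64 *) b;

    if (ta < tb)
      return -1;
    if (ta > tb)
      return 1;
    return 0;                          /* the added equality case */
  }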
The actual code checks for a NUL terminator and a ';' terminator,
for backward compat, in a chained way that causes all events to be rejected.
The proper condition is to reject the events when the terminator isn't
in the ['\0', ';'] set.
https://bugzilla.gnome.org/show_bug.cgi?id=758151
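In other words, the condition should look roughly like this (variable name
illustrative):

  /* accept '\0' (current) and ';' (backward compat) as terminators; reject
   * only when the terminator is neither of the two */
  if (*end != '\0' && *end != ';')
    goto reject;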
It would be unusual to have the header segment with an 'edts' atom
indicating gaps at the beginning when handling fragmented streams.
The header usually doesn't contain any timestamping information; this
should come from the playlist/manifest and the segments with media
in those scenarios.
https://bugzilla.gnome.org/show_bug.cgi?id=758171
On POSIX, IP_MULTICAST_LOOP is a setting for the sender socket. On Windows it
is a setting for the receiver socket. As such we will need it on udpsrc too to
allow filtering out our own multicast packets.
In push-mode it is hard to support qt segments overall, but it is
possible to support them when the file isn't heavily edited and just contains
a segment to indicate a gap at the beginning. This also allows properly
timestamping data that has negative DTS in push-mode.
It is relevant to support those for 2 scenarios:
1) fragmented streaming
2) HTTP playback of 'regular' mp4
https://bugzilla.gnome.org/show_bug.cgi?id=753484
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
https://bugzilla.gnome.org/show_bug.cgi?id=757480
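A sketch of the change (the timestamps are illustrative):

  GstClockTimeDiff diff = GST_CLOCK_DIFF (ts_a, ts_b);

  /* before: raw (possibly negative) nanosecond count */
  GST_DEBUG ("diff %" G_GINT64_FORMAT, diff);

  /* after (1.6+): signed, human-readable time, e.g. -0:00:01.000000000 */
  GST_DEBUG ("diff %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));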
For the MS/VfW codec ids, we want to write DTS timestamps instead
of PTS because that's what everyone else seems to do (and it's also
how it is in AVI). So for those input formats we use the buffer DTS
instead of the PTS. However, if there's no DTS set but only the PTS
then just take the PTS instead of dropping the input buffer. This
is useful especially for I-frame only codecs like JPEG and huffyuv,
but should also be fine as fallback in general.
Fixes regression with input JPEG frames that only have PTS set on them.
https://bugzilla.gnome.org/show_bug.cgi?id=756967
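The fallback amounts to roughly this sketch (not the actual code):

  GstClockTime ts;

  if (GST_BUFFER_DTS_IS_VALID (buf))
    ts = GST_BUFFER_DTS (buf);    /* what we write for MS/VfW codec ids */
  else
    ts = GST_BUFFER_PTS (buf);    /* fall back instead of dropping the buffer */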
Instead, delay it until all request pads have been released. This is
because the release_pad() vfunc requires the multiqueue and muxer to
be there in order to release their request pads as well. If those
elements are destroyed earlier, release_pad() does not work, no
pads are released and some resources are leaked.
https://bugzilla.gnome.org/show_bug.cgi?id=753622
We have to reverse all samples in a buffer before processing them to properly
have continuous data from one buffer to another. As a result we will have a
negative applied rate and a rate of 1.0.
Also make sure that input buffers are correctly clipped to the segment,
otherwise our calculations are going to go wrong.
Also copy over the segment event's sequence number to the output segment while
we're at it.
https://bugzilla.gnome.org/show_bug.cgi?id=757033
Implement accept-caps handler to avoid doing a full caps query
downstream to handle it.
This commit implements accept-caps as a simplification of the _getcaps
function, so it exposes the same limitations that getcaps would.
For example, it does not accept renegotiation to caps with capsfeatures when
it was last configured to caps that it has to deinterlace.
If the QtDemuxStreams are re-used they may already have caps, which used
to be leaked.
Reproduced using the
validate.dash.playback.seek_forward.dash_exMPD_BIP_TC1 validate
scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=756561
Negotiation to audio/x-raw,format=S8 was not possible because S8 does
not have a bit order so we ended up doing `if (!entry.fourcc) goto refuse_caps;`
https://bugzilla.gnome.org/show_bug.cgi?id=756387
They now use the new GstAudioVisualizer base class
from gst-plugins-base/gst-libs/gst/pbutils.
Also fixed an undefined reference to gst_audio_visualizer_get_type.
Added GST_PLUGINS_BASE_LIBS to Makefile.am and re-ordered LIBADD.
https://bugzilla.gnome.org/show_bug.cgi?id=742875
Add statistics from each rtp source to the rtp session property.
'source-stats' is a GValueArray where each element is a GstStructure of
stats for one rtp source.
The availability of new stats is signaled via g_object_notify.
https://bugzilla.gnome.org/show_bug.cgi?id=752669
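A sketch of how an application might read these stats (assumes the session's
existing "stats" structure property; variable names are illustrative):

  GstStructure *stats;
  const GValue *source_stats;

  g_object_get (rtpsession, "stats", &stats, NULL);
  source_stats = gst_structure_get_value (stats, "source-stats");
  /* source_stats holds a GValueArray; each element is a GstStructure
   * with the stats of one RTP source */
  gst_structure_free (stats);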
Buffers are added to the internal cache and pushed only when the accumulated
buffer duration crosses 200 ms. So when the chain ends, the accumulated
buffers are not freed. Free the cache when the state changes from PAUSED to READY.
https://bugzilla.gnome.org/show_bug.cgi?id=754212
By not doing this, the muxer is effectively not an rtpmuxer but rather a
funnel, since it should be a single stream that exits the muxer.
If not specified, take the first ssrc seen on a sinkpad, allowing upstream
to decide the ssrc in "passthrough" with only one sinkpad.
Also, let the downstream ssrc overrule the internally configured one.
We hence have the following order for determining the ssrc used by
rtpmux:
0. Suggestion from GstRTPCollision event
1. Downstream caps
2. ssrc-Property
3. (First) upstream caps containing ssrc
4. Randomly generated
https://bugzilla.gnome.org/show_bug.cgi?id=752694
If seeking targets an empty segment skip it as there is no media
offset to get from it. Instead look for the next one.
This doesn't make seeking in push-mode work if you seek to an
empty segment but at least won't get you to wrong offsets.
https://bugzilla.gnome.org/show_bug.cgi?id=753484
mux_start_time refers to the running_time of the buffer
that goes first in the output file. Normally this time is
0, so this variable is initialized to 0 during the state
change to PAUSED.
However, when dealing with dynamic pipelines and starting
a recording while the pipeline has already run for a while,
the running_time of the first buffer is > 0 and this causes
a problem with detecting the end of the first file(s) when
splitting by duration, because the code will later compare
the threshold_time with (last buffer running_time - mux_start_time)
and will get it wrong until mux_start_time advances enough
to make this difference < threshold_time, creating empty files
in the meantime.
https://bugzilla.gnome.org/show_bug.cgi?id=753624
During reverse playback, the media should stop playing at segment.start.
This does not happen, and avidemux continues to process data even when
the current timestamp is less than segment.start.
https://bugzilla.gnome.org/show_bug.cgi?id=755094
Avoid using the default accept-caps handler, which queries downstream
and is more expensive. Just check if the caps are compatible with
the template and if the channels are the same.
Caps from the pad template are being leaked. In any case they come
from a static pad template and will 'leak' in the end anyway; just do
the cleanup for good practice.
When reading a LOAS config, the flag combination v=1 and vA=1 can occur, leading to the
warning "Spec says "TBD"...". Returning TRUE in this case while the 'sample_rate' and
'channels' parameters point to uninitialized values can end up setting random values as
rate and channels on the src caps.
https://bugzilla.gnome.org/show_bug.cgi?id=755611
Otherwise we end up considering that the values did not change and wrongly
work with the old video format (which will lead to wrong
behaviour/segfaults).
https://bugzilla.gnome.org/show_bug.cgi?id=755621