When calling gst_rtp_jitter_buffer_reset you pass in a seqnum.
This is considered the starting point for a new stream.
However, the old behaviour unreffed the buffer carrying that seqnum,
effectively telling the thread that pushes out buffers to expect a
buffer that would never arrive. The result was that no more buffers
were pushed out of the jitterbuffer, and it buffered incoming data
indefinitely.
By instead inserting the buffer into the gap_packets queue, the _reset()
function takes responsibility for using it as the first buffer
of the new stream.
Fixes #703
In order to concatenate fragments, splitmuxsrc offsets
each fragment's start PTS to 0 to align it with the
previous file. This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
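A minimal sketch of the idea (the helper and the way the offset is applied are illustrative, not the actual splitmuxsrc code): shift both timestamps by a fixed amount so the DTS of the first fragment can no longer underflow.

```c
/* Hypothetical helper: apply a fixed offset so that a DTS that would
 * otherwise go below zero stays representable as a GstClockTime. */
static void
apply_fixed_offset (GstBuffer * buf, GstClockTime fixed_offset)
{
  if (GST_BUFFER_PTS_IS_VALID (buf))
    GST_BUFFER_PTS (buf) += fixed_offset;
  if (GST_BUFFER_DTS_IS_VALID (buf))
    GST_BUFFER_DTS (buf) += fixed_offset;
}
```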
In the alternate interlace mode each field is carried using its own buffer.
Allow deinterlace to negotiate caps with the Interlaced feature and
adjust how the algorithm fetches lines.
Fixes #620
Output frames used to have their interlace mode set to the same one as
the input. This breaks their field and comp heights when deinterlacing
an alternate stream.
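For reference, caps negotiated in this mode carry the Interlaced caps feature together with interlace-mode=alternate, each buffer holding a single field. A minimal sketch of building such caps (resolution and format are arbitrary):

```c
/* Caps of the kind deinterlace can now negotiate: one field per buffer. */
GstCaps *caps = gst_caps_from_string (
    "video/x-raw(format:Interlaced), format=NV12, "
    "width=320, height=240, interlace-mode=alternate");
```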
Each FLAC metadata block starts with a flag denoting whether it is the
last metadata block. The existing flacparse code moves any existing
VORBISCOMMENT block to immediately follow the STREAMINFO block without
changing any block's last-metadata-block flag. If no VORBISCOMMENT block
exists, it creates one with the last-metadata-block flag set to true.
This results in gstflacdec sometimes giving bad headers to libflac when
trying to play perfectly valid FLAC files depending on the file's
metadata ordering. Depending on the contents of the other metadata
blocks, current versions of libflac may or may not return
FLAC__STREAM_DECODER_ERROR_STATUS_BAD_HEADER when given this broken
metadata. This is most noticeable with files that have a large cover art
image attached where VORBISCOMMENT is the very last metadata block with
no PADDING afterwards.
This patch changes that behavior so that:
1. For FLAC files that already have a VORBISCOMMENT block, the metadata
order is preserved.
2. For FLAC files that do not have a VORBISCOMMENT block, the generated
dummy VORBISCOMMENT is placed immediately after STREAMINFO and
inherits the last-metadata-block flag from STREAMINFO.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/484
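For reference, the last-metadata-block flag that both cases hinge on lives in the first byte of every FLAC metadata block header (per the FLAC format spec, not flacparse-specific code). A minimal parsing sketch, assuming header points at the 4-byte block header:

```c
/* FLAC METADATA_BLOCK_HEADER:
 *   byte 0: bit 7    = last-metadata-block flag
 *           bits 6-0 = block type (0 = STREAMINFO, 4 = VORBIS_COMMENT, ...)
 *   bytes 1-3: length of the block data that follows (24-bit big endian) */
gboolean is_last = (header[0] & 0x80) != 0;
guint block_type = header[0] & 0x7f;
guint length = (header[1] << 16) | (header[2] << 8) | header[3];
```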
When using the testclock to control timing in a test, it is sometimes observed
that the clock entry is not registered in time by the aggregator, so a deadlock
occurs between the aggregator and the test thread.
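One common way to avoid this kind of race (a sketch using the GstTestClock API, not necessarily the exact fix; clock is assumed to be the test clock driving the aggregator) is to wait until the clock entry has actually been registered before advancing the clock:

```c
GstClockID pending;

/* Block until the element under test has registered its clock entry,
 * then move the test clock to that time so the wait can complete. */
gst_test_clock_wait_for_next_pending_id (GST_TEST_CLOCK (clock), &pending);
gst_test_clock_set_time (GST_TEST_CLOCK (clock),
    gst_clock_id_get_time (pending));
gst_clock_id_unref (pending);
```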
Sometimes the headers contain useless, wrong or zero values for, e.g., the
sample size with these formats. There is only a single valid value for
them, so set that instead.
We do not have a way to know the format modifiers to use with string
functions provided by the system. G_GUINT64_FORMAT and other string
modifiers only work for glib string formatting functions. We cannot
use them for string functions provided by the stdlib. See:
https://developer.gnome.org/glib/stable/glib-Basic-Types.html#glib-Basic-Types.description
```
../gst/rtpmanager/gstrtpjitterbuffer.c: In function 'gst_jitter_buffer_sink_parse_caps':
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: unknown conversion type character 'l' in format [-Werror=format=]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
In file included from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/gtypes.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/galloca.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib.h:30,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/gst.h:27,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/rtp/gstrtpbuffer.h:27,
from ../gst/rtpmanager/gstrtpjitterbuffer.c:108:
/home/nirbheek/cerbero/build/dist/windows_x86/lib/glib-2.0/include/glibconfig.h:69:28: note: format string is defined here
#define G_GUINT64_FORMAT "llu"
^
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: too many arguments for format [-Werror=format-extra-args]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
```
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/379
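One portable way around this (a sketch, not necessarily the fix that was applied; mediaclk and clock_offset are taken from the code shown in the error output above) is to avoid sscanf entirely and parse the value with a GLib helper:

```c
#include <string.h>
#include <glib.h>

/* Parse the "direct=<value>" part of the mediaclk attribute without
 * relying on platform-specific scanf format modifiers:
 * g_ascii_strtoull() handles 64-bit values on every platform. */
static gboolean
parse_direct_offset (const gchar * mediaclk, guint64 * clock_offset)
{
  const gchar *direct = strstr (mediaclk, "direct=");

  if (direct == NULL)
    return FALSE;
  *clock_offset = g_ascii_strtoull (direct + strlen ("direct="), NULL, 10);
  return TRUE;
}
```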
Previously we saved the buffer_timestamp straight into
mux->cluster_time. Depending on the timecodescale, the cluster time
saved into the file has lower precision than GstClockTime, so the
rounding of relative_timestamp was invalid: mux->cluster_time, which
it was calculated relative to, was not equal to the cluster time
actually written to the matroska file.
Example of "mkvinfo -v" output showing how this looks before and after
the change, in a scenario where timestamps previously got out of order
because of this issue.
Notice that the timestamps of the SimpleBlocks right before and right
after the Cluster are now in order. The consequence, however, is that
the cluster timestamp is not necessarily the same as the timestamp of
the first buffer in the cluster (in case it was rounded up).
Before:
```
| + SimpleBlock (track number 1, 1 frame(s), timecode 126.922s = 00:02:06.922)
| + Frame with size 432
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.933s = 00:02:06.933)
| + Frame with size 329
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.955s = 00:02:06.955)
| + Frame with size 333
|+ Cluster
| + Cluster timecode: 126.954s
| + Cluster previous size: 97344
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 126.954s = 00:02:06.954)
| + Frame with size 61239
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.975s = 00:02:06.975)
| + Frame with size 338
```
After:
```
| + SimpleBlock (track number 1, 1 frame(s), timecode 135.456s = 00:02:15.456)
| + Frame with size 2260
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.468s = 00:02:15.468)
| + Frame with size 332
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 335
|+ Cluster
| + Cluster timecode: 135.489s
| + Cluster previous size: 158758
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 88070
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.511s = 00:02:15.511)
| + Frame with size 336
```
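The fix amounts to storing the cluster time rounded to timecodescale precision, so relative block timestamps are computed against the value that actually ends up in the file. A sketch of the rounding (not the exact matroskamux code, assuming the default 1 ms timecodescale):

```c
/* Round the cluster time to the precision that will be written to the
 * file before storing it in mux->cluster_time. */
guint64 timecodescale = 1000000;  /* default TimecodeScale: 1 ms in ns */

mux->cluster_time =
    gst_util_uint64_scale_round (buffer_timestamp, 1, timecodescale)
    * timecodescale;
```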
Disable session sharing and cookie jar when cookies property is set.
The cookie jar actually replaces or removes any existing Cookie header
set on the message, so the cookies property was effectively being
ignored. There doesn't appear to be a way to inject the cookies into the
jar without having to specify matching domains etc., so it's not
possible to simulate the old behaviour of unconditionally sending the
cookies with all messages, other than by simply disabling the cookie jar.
Comparing gst_rtspsrc_loop_interleaved and gst_rtspsrc_loop_udp while investigating timeout issues, it looks like a piece of code was originally copied from the udp variant into the interleaved one. The timeout variable is never used inside the interleaved one, and no side effect has been seen from removing the function calls.
The removed debug message is pointless, as the timeout used is "src->tcp_timeout", which is fixed.
The presence of the two timeouts led my team to investigate whether the reference to tcp_timeout was correct (it is). Hence we removed the misleading reference to the local timeout variable.
The VP Codec Configuration Box (vpcC) contains vp9 profile and
colorimetry information. Especially the profile information might
be useful for downstream to select a capable decoder element.
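For reference, the box payload looks roughly like this, following the VP codec ISO-BMFF binding spec for version 1 of the box (the struct below only illustrates the field layout; it is not code from the element):

```c
/* Fields of the vpcC (VP Codec Configuration) box payload; the bit
 * widths in the comments are how they are packed in the file. */
typedef struct {
  guint8  profile;                         /*  8 bits */
  guint8  level;                           /*  8 bits */
  guint8  bit_depth;                       /*  4 bits */
  guint8  chroma_subsampling;              /*  3 bits */
  guint8  video_full_range_flag;           /*  1 bit  */
  guint8  colour_primaries;                /*  8 bits */
  guint8  transfer_characteristics;        /*  8 bits */
  guint8  matrix_coefficients;             /*  8 bits */
  guint16 codec_initialization_data_size;  /* 16 bits */
} VpcCFields;
```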
For live streams, if the stream runs for long enough, the timestamp
will exceed max_uint32. In that case, the timestamp should be handled
as a rolled-over timestamp rather than a backward timestamp.
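A minimal sketch of the distinction (illustrative only, not the element's code): with 32-bit wrap-around arithmetic, a genuine backward jump produces a small difference, while a rollover shows up as a difference larger than half the range.

```c
/* Returns TRUE if new_ts should be treated as a rolled-over timestamp
 * rather than a timestamp that actually went backwards. */
static gboolean
timestamp_rolled_over (guint32 prev_ts, guint32 new_ts)
{
  return new_ts < prev_ts && (prev_ts - new_ts) > (G_MAXUINT32 / 2);
}
```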
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
Instead of having chunks with one sample per raw audio sample, have
chunks with a single sample that contains lots of raw audio samples. If
necessary these are still split again later when reading the stream.
With this we are allocating a lot less memory for the parsed sample
tables and can play files that previously triggered our limit of 200MB
for the sample table. For example, one file here would previously
allocate 3.5GB for the sample table and now only allocates 70KB.
Outputting 48000 buffers per second is not a good idea performance-wise.
If a container sample is less than 1024 raw audio frames, combine
multiple samples to get at least 1024 raw audio samples as long as
they're stored contiguously in the file.
For the other direction, if a container sample contains more than 4096
samples there is already code for splitting them up.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=692750
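A rough sketch of the grouping idea (the RawSample type and the helper are hypothetical, not the actual qtdemux sample-table code): walk the parsed samples and merge contiguous ones until the merged sample covers at least 1024 audio frames.

```c
typedef struct {
  guint64 offset;    /* byte offset of the sample data in the file */
  guint   n_frames;  /* number of raw audio frames in this sample  */
} RawSample;

/* Merge contiguous raw-audio samples starting at 'start' until the
 * merged sample covers at least 1024 frames; returns how many input
 * samples were consumed and fills 'out' with the merged sample. */
static guint
merge_contiguous_samples (const RawSample * samples, guint n_samples,
    guint start, guint bytes_per_frame, RawSample * out)
{
  guint i = start;

  out->offset = samples[i].offset;
  out->n_frames = 0;

  while (i < n_samples && out->n_frames < 1024) {
    /* Stop as soon as the next sample is not stored contiguously. */
    if (samples[i].offset != out->offset + out->n_frames * bytes_per_frame)
      break;
    out->n_frames += samples[i].n_frames;
    i++;
  }

  return i - start;
}
```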
When the server replies with a range "now-", it is presumed to
be a "live" stream and we should request a similar range.
This was the case prior to my refactoring to make use of
gst_rtsp_range_to_string in 5f1a732bc7; this commit restores the
behaviour for that case.
The proper way of capping on max-streams is to do it in rtpssrcdemux.
This patch uses the newly introduced property on rtpssrcdemux. The previous
behavior would not prevent rtpssrcdemux from spawning new pads for every new
ssrc, potentially causing performance trouble during teardown.
When used for processing bundled media streams within rtpbin, the rtpssrcdemux element may
receive bad RTP and RTCP packets; these should not be treated as a fatal error.
The property is useful against attacks in which the sender changes the SSRC
for every RTP packet. The property with the same name introduced in rtpbin
was not enough, because we can still end up with thousands of pads
allocated in rtpssrcdemux.
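A minimal usage sketch (the element instance name and the cap of 32 are arbitrary; the property name follows the description above):

```c
/* Limit how many SSRCs rtpssrcdemux will create pads for. */
g_object_set (rtpssrcdemux, "max-streams", 32, NULL);
```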
gstrtspsrc uses a queue, set_get_param_q, to store set param and get
param requests. The requests are put on the queue by calling
get_parameters() and set_parameter(). A thread which executes in
gst_rtspsrc_thread() then pops requests from the queue and processes
them. The crash occurred because the queue became empty and a NULL
request object was then used. The reason that the queue became empty
is that it was popped even when the thread was NOT processing a get
parameter or set parameter command. The fix is to make sure that the
queue is ONLY popped when the command being processed is a set
parameter or get parameter command.
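A sketch of that guard (names here are illustrative, not the exact rtspsrc code): the request queue is only popped while a get/set parameter command is being handled, so it can never be drained by other commands.

```c
switch (cmd) {
  case CMD_GET_PARAMETER:
  case CMD_SET_PARAMETER:
    /* Only these commands may consume a queued request. */
    req = g_queue_pop_head (&src->set_get_param_q);
    if (req != NULL)
      handle_param_request (src, req);
    break;
  default:
    /* All other commands must leave the queue untouched. */
    break;
}
```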
gst_v4l2_object_set_format_full() was returning FALSE without setting
an error. Caller code (gst_v4l2src_fixate()) was then dereferencing a
NULL pointer when trying to handle the error.
If the sinks are not configured via the "location" property, this can be
useful to know for which sink the fragment was actually opened/closed,
especially if finalization of the fragments happens asynchronously.
When connected to an upstream rtpfunnel element, payload-type,
ssrc and clock-rate will not be present in the received caps.
rtprtxsend can already deal with only the clock-rate being
present there. A new property is exposed to allow users to
provide a payload-type -> clock-rate map; this enables the
use of the max-size-time property for bundled streams.
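A usage sketch of the new map (the property name "clock-rate-map" and the structure name are assumptions for illustration; the field names are payload types and the values are clock rates):

```c
/* Map payload type 96 -> 90000 Hz and 97 -> 48000 Hz. */
GstStructure *map = gst_structure_new ("application/x-rtp-clock-rate-map",
    "96", G_TYPE_INT, 90000,
    "97", G_TYPE_INT, 48000, NULL);

g_object_set (rtprtxsend, "clock-rate-map", map, NULL);
gst_structure_free (map);
```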
In Google webrtc, the setting VP8E_SET_STATIC_THRESHOLD is set to 1
(except when the content is known to be mostly static, in which
case it is set to 100, e.g. when sharing a screen with Google Hangouts).
CPU usage drops a lot when using 1 for this setting because it
allows the encoder to skip static/low-content blocks. The current
default value of 0 uses too much CPU and confuses users regarding
CPU usage expectations; users expect vp8enc to use low CPU by
default.
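For reference, the same value can be set explicitly on the element (a minimal sketch; vp8enc is assumed to be an instance of the vp8enc element, which exposes this as the static-threshold property):

```c
/* Equivalent explicit setting of the new default; use 100 instead for
 * mostly static content such as screen sharing. */
g_object_set (vp8enc, "static-threshold", 1, NULL);
```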
Documentation of VP8E_SET_STATIC_THRESHOLD:
https://github.com/webmproject/libvpx/blob/master/vpx/vp8cx.h#L188
chromium/webrtc:
b484ec0082/modules/video_coding/codecs/vp8/libvpx_vp8_encoder.cc (822)
Closes #58