Add property to set the initial value for picture-id. RFC7741 says
that picture-id MAY be initialized to a random value, thus it's also
valid to simply set it to a fixed initial value. A fixed value is very
useful for testing.
Default behavior is not changed.
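A minimal sketch of how this could be used from application code; the
property name "picture-id-offset" here is an assumption, not taken from
this message:

```
/* sketch: pin the initial picture-id for reproducible test runs
 * ("picture-id-offset" is an assumed property name) */
GstElement *pay = gst_element_factory_make ("rtpvp8pay", NULL);

g_object_set (pay, "picture-id-offset", 0, NULL);
```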
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/728>
In order to support the g_enum_to_string symbol in various
projects using GStreamer (gst-validate etc.), the GLib minimum
version should be 2.56.0.
Remove compat code as the GLib requirement
is now >= 2.56, the version used by Ubuntu 18.04 LTS.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/774>
Scenario:
- gap event causes h264parse to push made-up caps that may fail checks
inside qtmux (e.g. missing codec_data).
- the caps event has already been marked as received and is sticky on
the sink pad
- gst_qt_mux_pad_can_renegotiate() will retrieve the failed caps event
using gst_pad_get_current_caps() and reject the correct updated caps
with codec_data.
- Failure!
Keep track of the configured caps ourselves instead of relying on the
sticky event on the pad.
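A rough sketch of the approach, with hypothetical field and function
names:

```
/* hypothetical sketch: cache the caps qtmux actually accepted and
 * compare against those, not against the pad's sticky caps event */
struct _GstQTMuxPad
{
  GstAggregatorPad parent;
  GstCaps *configured_caps;     /* caps we configured ourselves */
};

static gboolean
can_renegotiate (GstQTMuxPad * pad, GstCaps * new_caps)
{
  /* gst_pad_get_current_caps () could return the made-up caps from
   * the gap event; the cached field cannot */
  return pad->configured_caps == NULL ||
      gst_caps_can_intersect (pad->configured_caps, new_caps);
}
```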
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/732>
Since !348, the jitterbuffer was only removed with the session. This restores
the original behaviour and removes the jitterbuffer when the stream is
removed. This avoids accumulating jitterbuffer objects in the bin when a
session is reused.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/735>
The rtpjitterbuffer is now part of the session elements, so we no longer
need to do the ref_sink dance when signalling it. It is already owned by
the bin when signalled. Also, the code that handles generic session
elements already handles the ref_sink() calls since:
03dc22951b
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/735>
If we have not received a FU with a start bit set, any subsequent FU
data is not useful at all and would result in an invalid stream.
This case is constructed from multiple requirements in
RFC 3984 Section 5.8 and RFC 7798 Section 4.4.3. Following are excerpts
from RFC 3984 but RFC 7798 contains similar language.
Transmitting a whole fragmented NAL unit in a single FU is forbidden:
A fragmented NAL unit MUST NOT be transmitted in one FU; i.e., the
Start bit and End bit MUST NOT both be set to one in the same FU
header.
and dropping is possible:
If a fragmentation unit is lost, the receiver SHOULD discard all
following fragmentation units in transmission order corresponding to
the same fragmented NAL unit.
The jump-in-seqnum case is supported by the following from the
specification, instead of implementing the forbidden_zero_bit mangling:
If a fragmentation unit is lost, the receiver SHOULD discard all
following fragmentation units in transmission order corresponding to
the same fragmented NAL unit.
A receiver in an endpoint or in a MANE MAY aggregate the first n-1
fragments of a NAL unit to an (incomplete) NAL unit, even if fragment
n of that NAL unit is not received. In this case, the
forbidden_zero_bit of the NAL unit MUST be set to one to indicate a
syntax violation.
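A minimal sketch of the resulting depayloader logic (names are
illustrative, not the exact code):

```
/* in H.264/H.265 FU headers, bit 0x80 is the Start (S) bit */
gboolean start = (fu_header & 0x80) != 0;

if (start) {
  depay->fu_started = TRUE;       /* a new fragmented NAL unit begins */
} else if (!depay->fu_started) {
  /* no start bit seen: the data cannot form a valid NAL unit */
  GST_DEBUG ("dropping FU without a preceding start bit");
  goto drop;
}
```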
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/730>
Used by some proprietary software for their fragmented files.
Adds some support for multi-stream fragmented files
The flow is as follows:
1. The first 'fragment' is written as a self-contained fragmented
mdat+moov complete with an edit list and durations, tags, etc.
2. Subsequent fragments are written with a mdat+moof and each stream is
interleaved as data arrives (currently ignoring the interleave-*
properties). Data-offsets in both the traf and the trun ensure
data is read from the correct place on demuxing. Data/chunk offsets
are also kept for writing out the final moov.
3. On finalisation, the initial moov is invalidated to a hoov and the
size of the first mdat is extended to cover the entire file contents.
Then a moov is written as it regularly would be in moov-at-end mode
(the default).
This results in a file that is playable throughout while leaving a
finalised file on completion for players that do not understand
fragmented mp4.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/643>
When we are fragmented, the edit list may only refer to the portion of
the media that is in the moov. Extend the edit list stop time
if there is only one qt segment and we are reading a fragmented file.
Fixes playback of some fragmented mp4 files generated by proprietary
programs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/643>
This is modelled after the DASH Common Encryption scheme, but is somewhat
simpler as more parts are fixed, i.e. just one encryption scheme.
The output caps are fixed to 'application/x-aavd'. All information
required for decryption is part of the 'adrm' atom, which is passed
on as a property. The property is attached to the buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/577>
The 'aavd' box is contained in the 'stsd' sample description. The 'aavd'
box follows the layout of an 'mp4a' entry, i.e. it contains a single
standard 'esds' extension box, and the two proprietary 'adrm' and 'aabd'
extension boxes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/577>
- Release the split mux lock while removing the probes
- Flush the sinkpad to unblock other pads
- Turn check_completed_gop into a do/while loop (see the sketch
below): when waking up we want to recheck whether the current GOP
is ready for sending
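A rough shape of the do/while change (all names are illustrative):

```
/* recheck GOP completeness every time we are woken up, instead of
 * checking only once before going to sleep */
do {
  if (ctx->flushing)
    break;                          /* unblock on flush */
  wait_for_input (splitmux);        /* sleep until signalled */
} while (!current_gop_ready (splitmux));
```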
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/719>
The code seems to use `continue` and `break` as if both refer to the
surrounding `while` loop. But because `break` breaks out of the
`switch`, they actually have the same effect.
This may have caused the loop not to terminate when it should. E.g. when
`skip_backwards_streams` drops a buffer we should abort the aggregation
and wait for all pads to be filled again. Instead, we might have just
selected a subsequent pad as our new "best".
Replace `break` with `done = TRUE; break`, and `continue` with `break`.
Then simplify the code a bit.
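The pitfall in miniature (illustrative, not the exact flvmux code):

```
while (!done) {
  switch (classify (pad)) {
    case BUFFER_DROPPED:
      /* a bare `break` here only leaves the switch and the loop would
       * keep choosing a new "best" pad; abort explicitly instead */
      done = TRUE;
      break;
    case PAD_NOT_USABLE:
      /* was `continue`: same observable effect as `break` here,
       * since both merely exit the switch */
      break;
    default:
      maybe_update_best (pad);
      break;
  }
  pad = next_pad (pad);
}
```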
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/710>
When we are dropping a buffer in find_best_pad (e.g. waiting for a
keyframe, or skipping a backwards timestamp), return
GST_AGGREGATOR_FLOW_NEED_DATA to make sure we have enough data at the
next run. Otherwise, a stream that accidentally fell behind (e.g.
relinking race, or just waiting for a keyframe) will never get the
opportunity to catch up to the other one, because the other one will
always keep advancing.
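A sketch of the idea with illustrative names (GST_AGGREGATOR_FLOW_NEED_DATA
is the real GstAggregator return value):

```
/* in find_best_pad: when a buffer has to be dropped, do not pick
 * another pad; ask aggregator to refill all pads first */
if (drop) {
  gst_buffer_unref (gst_aggregator_pad_pop_buffer (apad));
  return GST_AGGREGATOR_FLOW_NEED_DATA;
}
```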
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/696>
* Trying to disconnect a stream from a running splitmuxsink by flushing
it results in the FLUSH_START blocking in the stream queue's
gst_pad_pause_task because the flush did not unblock
complete_or_wait_on_out, so add a check for ctx->flushing there.
* Add a GST_SPLITMUX_BROADCAST_INPUT so check_completed_gop notices
flushing changed and the incoming push is unblocked.
* Pass the FLUSH_STOP along to the muxer without waiting.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/687>
GCC 10 was complaining like the following. It is really complaining about
default cases returning with a potentially uninitialized *desval, but those
cases in the switch should never be hit.
```
../subprojects/gst-plugins-good/gst/auparse/gstauparse.c: In function 'gst_au_parse_chain':
../subprojects/gst-plugins-good/gst/auparse/gstauparse.c:481:37: error: 'timestamp' may be used uninitialized in this function [-Werror=maybe-uninitialized]
481 | GST_BUFFER_TIMESTAMP (outbuf) = timestamp;
../subprojects/gst-plugins-good/gst/auparse/gstauparse.c:482:36: error: 'duration' may be used uninitialized in this function [-Werror=maybe-uninitialized]
482 | GST_BUFFER_DURATION (outbuf) = duration;
../subprojects/gst-plugins-good/gst/auparse/gstauparse.c:480:34: error: 'offset' may be used uninitialized in this function [-Werror=maybe-uninitialized]
480 | GST_BUFFER_OFFSET (outbuf) = offset;
cc1: all warnings being treated as errors
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/671>
These identifiers are registered with the MPEG-RA and are defined
for use by Dolby Vision AVC/HEVC streams.
This is a first step to present the stream to the decoder.
Additional box parsing of DOVIConfigurationBox is necessary
to complete the media presentation with proper Dolby Vision
enhancements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/658>
Previously, the user input for stsd entries was trusted completely, and
so a maliciously crafted file could choose the length of the stsd
entries arbitrarily and cause qtdemux to try to allocate up to 2GB of
memory (half of a 32 bit max int).
This patch fixes this by sanity checking the stsd input against the
size of the entire stsd atom.
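A sketch of the kind of check added (variable names are hypothetical):

```
/* an stsd entry cannot be larger than what remains of the stsd atom */
if (entry_size > stsd_size - consumed) {
  GST_WARNING ("stsd entry size %u exceeds remaining atom size", entry_size);
  goto corrupt_file;
}
```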
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/670>
During trak parsing, we need to check for the existence of stsd_entries,
otherwise, we end up with a NULL pointer to them. It is entirely
possible for the stsd to exist, but for it to have no entries, which the
previous checks did not take into account.
This patch adds a simple check to ensure that all files that do not
contain a stsd entry are deemed corrupt, and adds a test case to prevent
a regression.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/670>
Previously imagefreeze would always operate as non-live element and
output frames as fast as possible according to the configured segment
(via SEEK events) and the negotiated framerate from start to stop or the
other way around.
With the new live mode (enabled via the is-live property) it would only
output frames in PLAYING. Frames would be output according to the
negotiated framerate unless it would be too late, in which case it would
jump ahead and skip over the required number of frames.
This makes it possible to actually use imagefreeze in live pipelines
without having to manually ensure somehow that it would start outputting
at the current running time and without risking falling behind
without recovery.
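For illustration, enabling the new mode from application code (the
is-live property name is taken from this message):

```
GstElement *freeze = gst_element_factory_make ("imagefreeze", NULL);

/* only output frames in PLAYING, paced against the clock */
g_object_set (freeze, "is-live", TRUE, NULL);
```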
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/653>
We never run as a live element, even if upstream is live, and never
output any buffers with latency but immediately generate buffers as
fast as we can according to the negotiated framerate.
Passing the query upstream would proxy whatever mode of operation
upstream has, which has nothing to do with how we produce buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/653>
Move the SVMI stereoscopic atom parsing out to a helper
function to shrink qtdemux_parse_trak a bit.
Add a bounds check that the received atom is large enough
before parsing it.
Add a note to the atom parser that svmi comes from the
MPEG-A spec 23000-11.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/634>
Until now, do_expected_timeout() was briefly dropping the JBUF_LOCK in order
to push the RTX event without causing a deadlock. As a side effect, CPU hangs
could happen as the timerqueue would get filled while looping over the due
timers. To mitigate this, we were processing the lost timers first and
placing the remaining ones into a queue to be processed later.
In the gap caused by an unlock, we could end up receiving one of the seqnums
present in the pending timers. In that case, the timer would not be found and
a new one was created. When we then updated the expected timer, the seqnum
would already exist and the updated timer would be lost.
In this patch we remove the unlock from do_expected_timeout() and place all
pending RTX events into a queue (instead of pending timers). Then, as soon as
we have selected a timer to wait for (or if there is no timer to wait for) we
send all the upstream RTX events. As we no longer unlock, we no longer need
to pop more than one timer from the queue, and we do so with the lock held,
which blocks any new colliding timers from being created.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/616>
When a GST_EVENT_FLUSH_START reaches the jitterbuffer, there is a chance that
our task is currently blocking waiting for a timer.
There were two problems:
* That wait wasn't checking for flushing situations
* The flushing handling wasn't waking up that conditional (to check whether it
should abort)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/608>
In some algorithms (like yadif), the Y plane has to be handled differently
from the UV plane. Therefore, the planar_y functions are now called for
the Y plane, and the nv12/nv21 functions are handling only the UV/VU
planes respectively.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/444>
Switching the deinterlacing mode on-the-fly from disabled to
auto used to work, but was broken by commit 1f21747c some
years ago.
Force re-negotiation with downstream when the mode or
fields properties are changed, otherwise deinterlace
never switches out of the passthrough mode.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/584>
First of all get rid of the atomic seeking boolean, which was only ever
set and never read. Replace it with a flushing boolean that is used in
the loop function to distinguish no buffer because of flushing and no
buffer because of an error as otherwise we could end up in a
GST_FLOW_ERROR case during flushing.
Also only reset the state of imagefreeze in flush-stop when all
processing is stopped instead of doing it as part of flush-start.
And lastly, take a reference to the imagefreeze buffer at the very
beginning of the loop function and work from that, as otherwise it could
in theory be replaced or set to NULL in the meantime, since we release
and re-take the mutex a couple of times during the loop function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/580>
Backwards timestamps confuse librtmp, even if they're only backwards
relative to the other stream. If the timestamp of a stream is going
backwards relative to the other stream, this property allows the muxer to
skip a few buffers until it reaches the timestamp of the other stream.
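For illustration, how an application might enable this; the property name
"skip-backwards-streams" is an assumption, not taken from this message:

```
GstElement *mux = gst_element_factory_make ("flvmux", NULL);

/* skip buffers that go backwards relative to the other stream
 * ("skip-backwards-streams" is an assumed property name) */
g_object_set (mux, "skip-backwards-streams", TRUE, NULL);
```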
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/572>
Allows us to request pads after writing the header for streamable FLVs.
For non-streamable it doesn't make sense to request a new pad after
writing the header, because the headers have been written already and we
can't add the new stream. But for streamable, any clients that connect
after the new pad has been added will be able to see both streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/572>
As we override the GLib item with our own structure, we cannot use any
function from GList or GQueue that would try to free the RTPJitterBufferItem.
In this patch, we move away from g_queue_new(), which forces using
g_queue_free(). That function may use g_slice_free() if there are any items
left in the queue, and passing the wrong size to GSlice may cause data
corruption and crashes.
A better approach would be to use a proper intrusive linked list
implementation, but that's left as an exercise for the next person
running into crashes caused by this.
Beware that this regression was introduced 6 years ago in the following
commit [0]; the call to flush() looked useless, as there was a g_queue_free()
afterward.
Signed-off-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
[0] 479c7642fd
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/573>
The calculated threshold for timecode might be varying depending on
"max-size-timecode" and framerate.
For instance, with framerate 29.97 (30000/1001) and
"max-size-timecode=00:02:00;02", every fragment will have identical
number of frames 3598. However, when "max-size-timecode=00:02:00;00",
calculated next keyframe via gst_video_time_code_add_interval()
can be different per fragment, but this is the nature of timecode.
To compensate such timecode drift, we should keep track of expected
timecode of next fragment based on observed timecode.
The problem was this:
Due to the highly irregular arrival of RTX-packets, the max-misorder
variable could be pushed very low (-10).
If you then at some point get a big gap in the sequence-numbers (62 in the
test) you end up sending RTX-requests for some of those packets, and then
if the sender answers those requests, you are going to get a bunch of
RTX-packets arriving. (-13 and then 5 more packets in the test)
Now, if max-misorder is pushed very low at this point, these RTX-packets
will trigger the handle_big_gap_buffer() logic, and because they arrive
so neatly in order (as they would, since they have been requested like
that), gst_rtp_jitter_buffer_reset() will be called, and two things
will happen:
1. priv->next_seqnum will be set to the first RTX packet
2. the 5 RTX-packets will be pushed into the chain() function
However, at this point, these RTX-packets are no longer valid, the
jitterbuffer has already pushed lost-events for these, so they will now
be dropped on the floor, and never make it to the waiting loop-function.
And, since we now have a priv->next_seqnum that will never arrive
in the loop-function, the jitterbuffer is now stalled forever, and will
not push out another buffer.
The proposed fixes:
1. Don't use RTX in calculation of the packet-rate.
2. Don't use RTX in large-gap logic, as they are likely to be dropped.
If the start of the GOP is >= the requested running time, put it into a
new fragment. That is, split-at-running-time would always ensure that a
split happens as early as possible after the given running time.
Previously it was comparing against the current incoming timestamp,
which does not tell us what we actually want to know as it has no direct
relation to the GOP start/end.
This is disabled by default as it unnecessarily creates bigger headers
but it is something that is required by some applications and most
notably the Apple ProRes spec.
Request pads can be released at any time, so make sure to hold
the object lock when iterating the element sinkpads list where
that's safe, or to use other safe pad iteration patterns in
other places.
When choosing a best pad, return a reference to the pad to make sure it
stays alive for output in the aggregator srcpad task.
Should fix a spurious valgrind error in the CI flvmux tests and some
other potential problems if the request sink pads are released while
the element is running.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/714
Even if timecode trak is officially unsupported in non-mov flavors,
some software still supports it, e.g. Final Cut Pro X:
https://developer.apple.com/library/archive/technotes/tn2174/_index.html
The user might still expect to see the timecode information in the
non-mov file despite it being officially unsupported, because other
software e.g. QuickTime will create a timecode trak even in mp4 files.
Furthermore, software that supports timecode trak in non-mov flavors
will also display the file duration in "timecode units" instead of real
clock time, which is not necessarily the same for 29.97 fps and friends.
This might confuse users, who see a different duration for the same
framerate and amount of frames depending on whether the container is mp4
or mov.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/512
gst_buffer_map () results in copying memory when a GstBuffer contains
more than one GstMemory.
This has quite an impact on performance on systems with limited amount
of resources. With this patch the whole GstBuffer will not be mapped at
once, instead each individual GstMemory will be iterated and mapped
separately.
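The per-memory iteration uses the standard GstBuffer/GstMemory API,
roughly like this sketch:

```
guint i, n = gst_buffer_n_memory (buf);

for (i = 0; i < n; i++) {
  GstMemory *mem = gst_buffer_peek_memory (buf, i);
  GstMapInfo map;

  if (gst_memory_map (mem, &map, GST_MAP_READ)) {
    /* process map.data / map.size for this memory only */
    gst_memory_unmap (mem, &map);
  }
}
```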
There is a use-case for a server to re-payload opus going through it.
The problem was that the payloader requires channels in the caps, but
this is not something the depayloader can parse out of the stream, meaning
caps-negotiation would fail.
Removing the requirement of channels in the template-caps fixes this.
RTP session starts a new thread for RTCP and names it
"rtpsession-rtcp-thread" which happens to be longer than the maximum 16B
allowed by pthread_setname_np and causes the naming to fail.
See docs for more details.
This commit simply shortens the thread's name so it can actually be set.
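For reference, on Linux a thread name is limited to 16 bytes including
the terminating NUL, so "rtpsession-rtcp-thread" (23 bytes) is rejected;
the replacement name in this sketch is illustrative:

```
/* requires _GNU_SOURCE and <pthread.h> */
pthread_setname_np (pthread_self (), "rtpsession-rtcp"); /* 16 bytes incl. NUL */
```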
Changing the types from boolean to guint due to the ++ operator used on
them, and only calling JBUF_SIGNAL_QUEUE after settling down,
or else you end up signalling the waiting code in chain() for every buffer
pushed out.
When switching the splitmuxsrc state back to NULL quickly, it
can encounter deadlocks shutting down the part readers that
are still starting up, or encounter a crash if the splitmuxsrc
cleaned up the parts before the async callback could run.
Taking the state lock to post async-start / async-done messages can
deadlock if the state change function is trying to shut down the
element, so use some finer grained locks for that.
The queued time includes the duration of the last queued frame
(i.e., the new keyframe), so the condition check should not be inclusive
(see the sketch below).
Note that the new fragment will be cut excluding the last frame,
and therefore if the condition were inclusive,
the fragment might have a one-frame-shorter duration for all-keyframe
streams such as jpeg or all-intra video streams.
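In other words (a sketch with illustrative names):

```
/* queued_time already includes the new keyframe's duration, so split
 * on strictly-greater, not greater-or-equal */
if (queued_time > splitmux->threshold_time)     /* not >= */
  start_new_fragment (splitmux);
```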
Since commit 94bb76b6b9, splitmuxsink
will split fragments based on the queued time and its threshold,
so we don't need to store the next timecode for the split decision.
Not only the requested keyframe time but also the queued size should be
a criterion for the split decision in timecode-based mode
(same as in the max-size-time based split case).
This is a concept that only applies when a buffer arrives in the chain
function, and it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superfluous, and this buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today, where a single timer will
be scheduled for a series of lost packets, and then if the first packet
in that series arrives, it will cause a rescheduling of that timer, going
from a "multi"-timer to a single-timer, causing a lot of the packets
in that timer to be unaccounted for, and creating a situation where
the jitterbuffer will never again push out another packet.
This patch solves the problem by not scheduling those lost packets
as another timer, but instead asking to have the lost-event pushed
straight out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
This change has some interesting knock-on effect being presented in
later commits. It completely removes the concept of "already-lost", so
that is why that test has been disabled in this commit, to be
removed later.
This should result in no worse accuracy than the base parse element, and may
result in better accuracy. In particular, the number of bytes processed at any
given point, as accumulated by baseparse, can only be accurate to
(1 / # of frames) bytes per second, and if we try to seek immediately after
pausing the pipeline to a large offset, this small inaccuracy can propagate to
something noticeable.
The use case that prompted this patch is a 45-minute MPEG-1 layer 3 file, which
has a constant bit rate but no seek tables. Trying to seek the pipeline
immediately after pausing it, without the ACCURATE flag, to a location 41
minutes in, yields a location that is, even with <https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/374>,
still audibly incorrect. This patch yields a much closer position, no longer
audibly incorrect, and likely within a frame of the most correct position.
By the time sink_event is called, the pad's current caps have
already been updated. To address this, implement sink_event_pre_queue,
and check if the pad can be renegotiated there.
Fixes #707
- Refactored the planar transform method to support all video formats
that are stored planar, independently of the subsampling used
- Added support for Y444
gst_buffer_map () results in copying memory when a GstBuffer contains
more than one GstMemory and when AVC (length-prefixed) alignment is used.
This has quite an impact on performance on systems with limited amount of
resources. With this patch the whole GstBuffer will not be mapped at once,
instead each individual GstMemory will be iterated and mapped separately.
When calling gst_rtp_jitter_buffer_reset you pass in a seqnum.
This is considered the starting-point for a new stream.
However, the old behavior would unref this buffer, basically lying to
the thread that is pushing out buffers by telling it that it could expect
this buffer when it would never arrive. The resulting effect was that no
more buffers were pushed out of the jitterbuffer, and it would buffer
incoming data indefinitely.
By instead inserting the buffer in the gap_packets queue, the _reset()
function will take responsibility for using that as the first buffer
of the new stream.
Fixes #703
In order to concatenate fragments, splitmuxsrc offsets
the start of each fragment PTS to 0 to align it with the
previous file. This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
In this mode each field is carried using its own buffer.
Allow deinterlace to negotiate caps with the Interlaced feature and
adjust the algorithm fetching lines.
Fixes #620
Output frames used to have their interlace mode set to the same one as
the input. This breaks their field and comp heights when deinterlacing
an alternate stream.
Each FLAC metadata block starts with a flag denoting whether it is the
last metadata block. The existing flacparse code moves any existing
VORBISCOMMENT block to immediately follow the STREAMINFO block without
changing any block's last-metadata-block flag. If no VORBISCOMMENT block
exists, it creates one with the last-metadata-block flag set to true.
This results in gstflacdec sometimes giving bad headers to libflac when
trying to play perfectly valid FLAC files depending on the file's
metadata ordering. Depending on the contents of the other metadata
blocks, current versions of libflac may or may not return
FLAC__STREAM_DECODER_ERROR_STATUS_BAD_HEADER when given this broken
metadata. This is most noticeable with files that have a large cover art
image attached where VORBISCOMMENT is the very last metadata block with
no PADDING afterwards.
This patch changes that behavior so that:
1. For FLAC files that already have a VORBISCOMMENT block, the metadata
order is preserved.
2. For FLAC files that do not have a VORBISCOMMENT block, the generated
dummy VORBISCOMMENT is placed immediately after STREAMINFO and
inherits the last-metadata-block flag from STREAMINFO.
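For reference, the first byte of a FLAC metadata block header carries the
flag, so rule 2 amounts to a sketch like this (buffer names are
illustrative):

```
/* first header byte: bit 7 = last-metadata-block flag,
 * bits 0-6 = block type (0 = STREAMINFO, 4 = VORBIS_COMMENT) */
vorbiscomment_hdr[0] = (streaminfo_hdr[0] & 0x80) | 0x04;
streaminfo_hdr[0] &= 0x7f;      /* STREAMINFO is no longer last */
```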
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/484
Sometimes the headers contain useless, wrong or zero values for e.g. the
sample size with these formats. There's only a single valid value for
them, so let's set these instead.
We do not have a way to know the format modifiers to use with string
functions provided by the system. G_GUINT64_FORMAT and other string
modifiers only work for glib string formatting functions. We cannot
use them for string functions provided by the stdlib. See:
https://developer.gnome.org/glib/stable/glib-Basic-Types.html#glib-Basic-Types.description
```
../gst/rtpmanager/gstrtpjitterbuffer.c: In function 'gst_jitter_buffer_sink_parse_caps':
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: unknown conversion type character 'l' in format [-Werror=format=]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
In file included from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/gtypes.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/galloca.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib.h:30,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/gst.h:27,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/rtp/gstrtpbuffer.h:27,
from ../gst/rtpmanager/gstrtpjitterbuffer.c:108:
/home/nirbheek/cerbero/build/dist/windows_x86/lib/glib-2.0/include/glibconfig.h:69:28: note: format string is defined here
#define G_GUINT64_FORMAT "llu"
^
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: too many arguments for format [-Werror=format-extra-args]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
```
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/379
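One portable alternative (a sketch, not necessarily the exact fix
applied) is to let GLib do the parsing instead of sscanf:

```
/* needs <string.h>; g_ascii_strtoull is provided by GLib */
const gchar *direct = strstr (mediaclk, "direct=");

if (direct == NULL)
  goto invalid;
clock_offset = g_ascii_strtoull (direct + strlen ("direct="), NULL, 10);
```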
Previously we saved the buffer_timestamp straight into
mux->cluster_time. Since the cluster time saved into the file does not
have as high a precision as GstClockTime (depending on the timecodescale),
the rounding of relative_timestamp was invalid, as mux->cluster_time,
which it was calculated relative to, was not equal to the cluster time
written to the matroska file.
Example from "mkvinfo -v" of how it looks before and after this change
in a scenario where timestamps previously got out of order because of
this issue.
Notice the timestamp of the SimpleBlock right before and right after the
Cluster now being in order. The consequence of this, however, is that
the cluster timestamp is not necessarily the same as the timestamp of
the first buffer in the cluster (in case it's rounded up).
Before
| + SimpleBlock (track number 1, 1 frame(s), timecode 126.922s = 00:02:06.922)
| + Frame with size 432
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.933s = 00:02:06.933)
| + Frame with size 329
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.955s = 00:02:06.955)
| + Frame with size 333
|+ Cluster
| + Cluster timecode: 126.954s
| + Cluster previous size: 97344
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 126.954s = 00:02:06.954)
| + Frame with size 61239
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.975s = 00:02:06.975)
| + Frame with size 338
After
| + SimpleBlock (track number 1, 1 frame(s), timecode 135.456s = 00:02:15.456)
| + Frame with size 2260
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.468s = 00:02:15.468)
| + Frame with size 332
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 335
|+ Cluster
| + Cluster timecode: 135.489s
| + Cluster previous size: 158758
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 88070
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.511s = 00:02:15.511)
| + Frame with size 336
Comparing gst_rtspsrc_loop_interleaved and gst_rtspsrc_loop_udp while
investigating timeout issues, it appears that a piece of code was
originally copied from the udp variant into the interleaved one. The
timeout variable is never used inside the interleaved one, and no side
effect has been observed from removing the function calls.
The removed debug message is pointless, as the timeout used is
"src->tcp_timeout", which is fixed.
The presence of the two timeouts drove my team to investigate whether
the reference to tcp_timeout was correct (it is). Hence we removed the
misleading reference to the local timeout variable.
The VP Codec Configuration Box (vpcC) contains vp9 profile and
colorimetry information. Especially the profile information might
be useful for downstream to select a capable decoder element.
For live streams, if we keep streaming for a long time, the timestamp
will grow larger than max_uint32. In that case, the timestamp should be
handled as a rolled-over timestamp rather than a backward one.
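A sketch of the rollover handling (threshold and names are illustrative):

```
/* a huge backwards jump close to 2^32 means the 32-bit RTMP timestamp
 * wrapped around rather than actually going backwards */
if (last_ts > ts && (last_ts - ts) > (G_GUINT64_CONSTANT (1) << 31))
  ts += G_GUINT64_CONSTANT (1) << 32;   /* unwrap past max_uint32 */
```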
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
Instead of having chunks with one sample per raw audio sample, have
chunks with a single sample that contains lots of raw audio samples. If
necessary these are still split again later when reading the stream.
With this we are allocating a lot less memory for the parsed sample
tables and can play files that previously triggered our limit of 200MB
for the sample table. For example, one file here would previously
allocate 3.5GB for the sample table and now only allocates 70KB.
Outputting 48000 buffers per second is not a good idea performance-wise.
If a container sample is less than 1024 raw audio frames, combine
multiple samples to get at least 1024 raw audio samples as long as
they're stored contiguously in the file.
For the other direction, if a container sample contains more than 4096
samples there is already code for splitting them up.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=692750
When the server replies with a range "now-", it is presumed to
be a "live" stream and we should request a similar range.
This was the case prior to my refactoring to make use of
gst_rtsp_range_to_string in 5f1a732bc7;
this commit restores the behaviour for that case.
The proper way of capping on max-streams is to do it in rtpssrcdemux.
This patch uses the newly introduced property on rtpssrcdemux. Previous
behavior would not prevent rtpssrcdemux from spawning new pads for every
new ssrc, potentially causing performance trouble during teardown.
When used for processing bundled media streams within rtpbin, the
rtpssrcdemux element may receive bad RTP and RTCP packets; these should
not be treated as a fatal error.
The property is useful against attacks where the sender changes the SSRC
for every RTP packet. The property with the same name introduced in
rtpbin was not enough, because we could still end up with thousands of
pads allocated in rtpssrcdemux.
gstrtspsrc uses a queue, set_get_param_q, to store set param and get
param requests. The requests are put on the queue by calling
get_parameters() and set_parameter(). A thread which executes in
gst_rtspsrc_thread() then pops requests from the queue and processes
them. The crash occurred because the queue became empty and a NULL
request object was then used. The reason that the queue became empty
is that it was popped even when the thread was NOT processing a get
parameter or set parameter command. The fix is to make sure that the
queue is ONLY popped when the command being processed is a set
parameter or get parameter command.
If the sinks are not configured via the "location" property, this can be
useful to know for which sink the fragment was actually opened/closed,
especially if finalization of the fragments is happening asynchronously.
When connected to an upstream rtpfunnel element, payload-type,
ssrc and clock-rate will not be present in the received caps.
rtprtxsend can already deal with only the clock-rate being present
there; a new property is exposed to allow users to provide a
payload-type -> clock-rate map (see the sketch below). This enables
the use of the max-size-time property for bundled streams.
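A sketch of how an application might fill such a map; the property and
structure names here are assumptions:

```
/* map payload type -> clock-rate, mirroring the payload-type-map style */
GstStructure *map = gst_structure_new ("application/x-rtp-clock-rate-map",
    "96", G_TYPE_UINT, 90000, "97", G_TYPE_UINT, 48000, NULL);

g_object_set (rtxsend, "clock-rate-map", map, NULL);
gst_structure_free (map);
```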
ffmpeg is doing the same and various files in the wild have bogus
information in the sample description if the same information is also
duplicated afterwards in the v1/v2 sound sample description.
Previously we only did this for non-raw audio due to
https://bugzilla.gnome.org/show_bug.cgi?id=374914
but this specific file is already worked around differently. It still
works after this change.
Also remove ad-hoc GST_READ_DOUBLE_BE re-implementation and move the
switch for legacy audio formats after reading all the sample
descriptions as we want to override the values from there.
By default imagefreeze will still reject new buffers after the first one
and immediately return GST_FLOW_EOS, but the new allow-replace property
allows changing this.
Whenever updating the buffer we now also keep track of the configured
caps of the buffer and from the source pad task negotiate correctly
based on the potentially updated caps.
Negotiation of a framerate with downstream is only performed the very
first time; afterwards only the caps themselves, apart from the
framerate, are updated.
Elements emitting frames through several srcpads should use a
flow combiner to aggregate the chain returns and therefore only return
GST_FLOW_NOT_LINKED to upstream when all the downstream pads have
received GST_FLOW_NOT_LINKED.
In addition to that, in order to handle pads being relinked downstream,
the flow combiner should be reset in response to RECONFIGURE events.
This ensures that both srcpads process a chain operation before a
GST_FLOW_NOT_LINKED can be propagated upstream (which would usually stop
the pipeline).
Otherwise, in a configuration with two srcpads, only one linked at a
time, after the relink the element could chain data through the now
unlinked pad and the flow combiner would resolve as GST_FLOW_NOT_LINKED
(stopping the pipeline) just because the now linked pad has not been
chained yet to update the flow combiner.
This patch adds handling of RECONFIGURE events to qtdemux. Also, since
this event handling causes the flow combiner to be used from a thread
other than the qtdemux streaming thread, usages of the flow combiner
have been guarded by the object lock.
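The pattern uses the GstFlowCombiner API from libgstbase; a sketch (not
qtdemux's exact code):

```
GstFlowCombiner *combiner = gst_flow_combiner_new ();

gst_flow_combiner_add_pad (combiner, srcpad_a);
gst_flow_combiner_add_pad (combiner, srcpad_b);

/* after each push, combine: GST_FLOW_NOT_LINKED is only returned
 * upstream once *all* pads have reported it */
ret = gst_pad_push (srcpad_a, buffer);
ret = gst_flow_combiner_update_pad_flow (combiner, srcpad_a, ret);

/* on a RECONFIGURE event, forget the cached per-pad returns */
gst_flow_combiner_reset (combiner);
```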
The key is to make sure the jitterbuffer is set to NULL *before* the
ptdemux.
The race that existed would basically happen when ptdemux had reached
READY, and the jitterbuffer would then push a buffer, triggering a new
pad with a new payloadtype being added and ghosted to the rtpbin itself.
However, the srcpad of the ptdemux would now be inactive, and all the
sticky events pushed on it would be swallowed, not allowing any to reach
the ghost-pad. Then the buffer in-flight would come to the ghostpad,
and we would assert that a buffer arrived before the necessary
events.
By simply re-ordering the state-changes, we ensure that there will be
no buffer racing into the ptdemux while its state is being changed,
and the problem disappears completely.
Notice also that there is no point in disconnecting the signals on the
ptdemux before this point, since we need the push-thread to settle
down before we can do this in a non-racy way.
Applications might handle locations and the general configuration of the
sink by themselves instead of having splitmuxsink set the location on
the sink. Nonetheless it makes sense to increment the fragment_id that
is passed to the signal so that applications know which fragment is
requested.
Add parsed=true to output caps, as we always output
whole frames, timestamped and all. This also means that
the output can be decoded by avdec_mjpeg without
plugging an extra parser (which has no rank).