We currently send data to the RTSP connection from multiple threads:
whenever a command is to be handled and whenever RTCP is generated. This
can cause data corruption or worse if both happen at the same time.
As such, protect gst_rtsp_connection_send() and gst_rtsp_connection_receive()
calls with a mutex. While this means that we hold a mutex during the IO
operation, this is not actually a problem as the IO operation can be
interrupted (gst_rtsp_connection_flush()) at any time and is blocking by
itself anyway.
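Roughly, the pattern looks like this (hypothetical wrapper type and field
names, not the actual code):

    typedef struct
    {
      GstRTSPConnection *conn;
      GMutex send_lock;           /* hypothetical: serialises all senders */
    } Client;

    static GstRTSPResult
    do_send (Client * client, GstRTSPMessage * message, GTimeVal * timeout)
    {
      GstRTSPResult res;

      /* holding the lock across the IO is fine: the blocking send can be
       * interrupted at any time via gst_rtsp_connection_flush() */
      g_mutex_lock (&client->send_lock);
      res = gst_rtsp_connection_send (client->conn, message, timeout);
      g_mutex_unlock (&client->send_lock);

      return res;
    }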
In "robust" muxing mode the last entry will most likely get new samples
added to it, changing its samples_per_chunk and thus making it wrong to
keep the last two entries merged. We would then run into an assertion
later when adding a new sample to the chunk.
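A rough sketch of the resulting merge guard, with hypothetical names:

    typedef struct
    {
      guint32 samples_per_chunk;  /* hypothetical, reduced stsc entry */
    } Entry;

    static gboolean
    stsc_entries_can_merge (const Entry * entries, guint index, guint n_entries)
    {
      /* never merge into the final entry: robust muxing may still add
       * samples to it and change its samples_per_chunk */
      if (index + 1 == n_entries - 1)
        return FALSE;

      return entries[index].samples_per_chunk ==
          entries[index + 1].samples_per_chunk;
    }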
Thanks to gdiener@cardinalpeak.com for the analysis of the bug and
proposal for a solution.
There might be other chunks after the data chunk, so clipping the chunk
size against the data size can lead to a negative number, after which all
following calculations go wrong and cause crashes or worse.
This was introduced in 3ac119bbe2.
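One way to avoid the negative result is to clip only when the chunk
actually overruns the data; a sketch with hypothetical names:

    static guint64
    clip_chunk_size (guint64 chunk_offset, guint64 chunk_size, guint64 data_end)
    {
      if (chunk_offset >= data_end)
        return 0;                       /* chunk lies entirely past the data */
      if (chunk_offset + chunk_size > data_end)
        return data_end - chunk_offset; /* guaranteed positive here */
      return chunk_size;
    }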
https://bugzilla.gnome.org/show_bug.cgi?id=783760
They can cause us to deadlock while we're waiting for a new frame and
upstream is waiting for the allocation query to be answered before
sending a frame.
https://bugzilla.gnome.org/show_bug.cgi?id=783753
There is no difference between pushing out a buffer directly
with gst_rtp_base_depayload_push() and returning it from the
process function. The base class will just call _depayload_push()
on the returned buffer as well.
So instead of marshalling buffers through three layers and back,
just push them from one place in handle_nal() and always return
NULL from the process vfunc. This simplifies the code a little.
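The resulting pattern, sketched with a hypothetical element (not the
actual rtph264depay code):

    static GstBuffer *
    gst_foo_depay_process (GstRTPBaseDepayload * depayload, GstBuffer * buf)
    {
      GstFooDepay *self = GST_FOO_DEPAY (depayload);

      /* handle_nal() calls gst_rtp_base_depayload_push() itself whenever
       * a complete unit is ready */
      gst_foo_depay_handle_nal (self, buf);

      /* NULL tells the base class there is nothing left for it to push */
      return NULL;
    }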
Also rename _push_fragmentation_unit() to _finish_fragmentation_unit()
for clarity: "push" sounds like the data is pushed out, whereas it might
just be pushed into an adapter.
This change has the side-effect that multiple NALs in a single STAP
(such as SPS/PPS) may no longer be pushed out as a single buffer if
we output NALs in byte-stream format (i.e. not aggregate AUs), but
that shouldn't really make any difference to anyone.
Use the ::process_rtp_packet() vfunc to avoid mapping the
RTP buffer twice.
gst_rtp_buffer_get_payload_buffer() returns a new sub-buffer
which will always be writable, so no need to make it writable.
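A sketch of the pattern with a hypothetical element:

    static GstBuffer *
    gst_foo_depay_process_rtp_packet (GstRTPBaseDepayload * depayload,
        GstRTPBuffer * rtp)
    {
      /* the base class hands us an already-mapped RTP buffer, so there is
       * no second gst_rtp_buffer_map() here; the returned payload is a
       * freshly created sub-buffer and therefore already writable */
      return gst_rtp_buffer_get_payload_buffer (rtp);
    }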
Every g_quark_from_static_string() is a hash table lookup serialised
on the global quark lock in GLib. Let's just look up the two quarks
we need once and cache them locally for future use. While we're at it,
add new utility functions for the two most commonly used tags
(audio + video). Make the first argument a gpointer so we don't have to
cast and make the code ugly; these are used for logging purposes only
anyway.
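The caching itself is just a one-time lookup; a sketch with hypothetical
names:

    static GQuark audio_tag_quark;
    static GQuark video_tag_quark;

    static void
    ensure_tag_quarks (void)        /* e.g. called once from class_init */
    {
      if (G_UNLIKELY (audio_tag_quark == 0)) {
        audio_tag_quark = g_quark_from_static_string ("audio");
        video_tag_quark = g_quark_from_static_string ("video");
      }
    }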
Since the move from CVS, the documentation example has used the property
name filename instead of location. Users trying the gst-launch command
as-is will get:
no property name "filename" in element
Fix it.
If a non-reference stream is behind the reference stream by an amount of
time smaller than the alignment threshold (in nsec), it counts as being
after it.
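In other words (sketch with hypothetical names):

    static gboolean
    stream_is_behind (GstClockTime stream_rt, GstClockTime ref_rt,
        GstClockTime alignment_threshold)
    {
      GstClockTimeDiff lag = GST_CLOCK_DIFF (stream_rt, ref_rt);

      /* trailing by less than the threshold counts as being after the
       * reference stream, i.e. not behind it */
      return lag >= (GstClockTimeDiff) alignment_threshold;
    }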
https://bugzilla.gnome.org/show_bug.cgi?id=782563
Timecode trak is only supported for mov right now, not for mp4. That
code would otherwise create an invalid trak if the muxed video contained
timecode metadata.
https://bugzilla.gnome.org/show_bug.cgi?id=782684
We only accept new caps if they are basically the same. We don't want to
reset anything as if the caps were new, otherwise various state could get
out of sync with the current run.
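Sketched with a hypothetical element:

    static gboolean
    foo_setcaps (GstFoo * self, GstCaps * caps)
    {
      if (self->caps != NULL) {
        /* only accept caps equivalent to the current ones and keep all
         * state; never reset mid-run */
        return gst_caps_is_equal (self->caps, caps);
      }

      self->caps = gst_caps_ref (caps);
      return TRUE;
    }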
We have some padding added after the initial moov, so a bigger updated
moov can be handled to some degree and is expected. Previously we just
ignored the padding and errored out even in cases where the padding would
have been enough.
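The check boils down to something like this (hypothetical names):

    static gboolean
    updated_moov_fits (guint64 reserved_moov_size, guint64 reserved_padding,
        guint64 new_moov_size)
    {
      /* the padding after the initial moov is part of the usable space */
      return new_moov_size <= reserved_moov_size + reserved_padding;
    }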
This sets up a moov with the correct sample positions beforehand and
only works with constant framerate, I-frame only streams.
Currently only support for ProRes and raw audio is implemented but
adding new codecs is just a matter of defining appropriate maximum frame
sizes.
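With those constraints every sample slot has a fixed size, so all offsets
can be computed up front; a sketch with hypothetical names:

    static void
    fill_sample_offsets (guint64 * offsets, guint n_samples,
        guint64 mdat_start, guint64 max_frame_size)
    {
      guint i;

      /* constant framerate + I-frame only: sample N always starts at a
       * predictable position, using the per-codec maximum frame size */
      for (i = 0; i < n_samples; i++)
        offsets[i] = mdat_start + (guint64) i * max_frame_size;
    }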
https://bugzilla.gnome.org/show_bug.cgi?id=781447
When muxing raw audio, we have no way of storing timestamps but are just
storing a continuous stream of audio samples. If the difference between
the expected and the real timestamp becomes too big, we should error out
instead of silently creating files with wrong A/V sync.
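A sketch of such a check, with hypothetical names:

    static GstFlowReturn
    check_raw_audio_drift (GstElement * mux, GstClockTime expected_ts,
        GstClockTime buffer_ts, GstClockTime max_drift)
    {
      GstClockTimeDiff drift = GST_CLOCK_DIFF (expected_ts, buffer_ts);

      if (ABS (drift) > (GstClockTimeDiff) max_drift) {
        GST_ELEMENT_ERROR (mux, STREAM, MUX, (NULL),
            ("Audio timestamp drift is too large to keep A/V sync"));
        return GST_FLOW_ERROR;
      }

      return GST_FLOW_OK;
    }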
https://bugzilla.gnome.org/show_bug.cgi?id=780679
Re-arrange order of index entry struct members to avoid padding
bytes in the middle of the struct, thus potentially reducing the
overall size of the struct and reducing memory used by the index.
On Linux x86_64 the size goes down from 32 bytes to 24 bytes for
each index entry.
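For illustration only (not the actual struct), the effect of member order
on a 64-bit ABI:

    typedef struct
    {
      guint32 size;               /* 4 bytes + 4 bytes padding */
      guint64 offset;             /* 8 bytes, needs 8-byte alignment */
      guint32 duration;           /* 4 bytes + 4 bytes tail padding */
    } PaddedEntry;                /* sizeof == 24 on x86_64 */

    typedef struct
    {
      guint64 offset;             /* 8 bytes */
      guint32 size;               /* 4 bytes */
      guint32 duration;           /* 4 bytes, no padding needed */
    } PackedEntry;                /* sizeof == 16 on x86_64 */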
Provide the system clock if no clock was provided directly by rtspsrc.
This behaviour was removed by f8013487c9 and results in rtspsrc not
providing the system clock via the rtpjitterbuffer.
As a result, if another element like an audio sink, provides a clock,
the pipeline would select that (when going to PAUSED/PLAYING again later).
Audio clocks usually don't progress in PAUSED, and thus our live source
won't be able to use the clock to produce data, making the sink never
preroll and leaving everything stuck.
... unless the muxer uses the same audio pad template name as
splitmuxsink. We can't request a pad called "audio_0" on a muxer that
wants pads to be "sink_%d".
In push mode we process as much as possible in the adapter. When we receive
a DISCONT buffer which we can't match to an actual sample (based on the existing
sample table) and there is still data remaining in the incoming adapter, one of
two things is happening:
1) We are doing reverse playback, in which case we should flush out all pending
data
2) We have leftover data from the previous incoming buffer... which we can't do
anything about.
For the second case, make sure we flush out the remaining data so that we can start
parsing again from scratch.
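A sketch of the flush condition, with hypothetical names:

    /* DISCONT buffer, no matching sample, and stale bytes in the adapter:
     * drop the leftovers so parsing restarts from scratch */
    if (has_discont && !found_sample && gst_adapter_available (adapter) > 0)
      gst_adapter_clear (adapter);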
https://bugzilla.gnome.org/show_bug.cgi?id=781319