Instead of constantly querying upstream, just cache the last duration,
and only query again in the unlikely case that we have gone past it,
before deciding we are EOS.
Cuts 15% CPU off the matroskademux streaming thread (srsly...)
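A minimal sketch of the caching idea, with an invented demuxer struct
(not the real matroskademux code): keep the last duration around and
only hit upstream again when the current offset looks like it is past
the end.

    #include <gst/gst.h>

    /* Illustrative only; type and field names are made up. */
    typedef struct {
      GstPad *sinkpad;
      gint64  cached_duration;   /* in bytes, -1 when unknown */
    } MyDemux;

    static gboolean
    my_demux_is_eos (MyDemux * demux, gint64 offset)
    {
      if (demux->cached_duration == -1 || offset >= demux->cached_duration) {
        /* Only re-query upstream in the unlikely case that we appear to
         * have run past the cached duration (it may have grown). */
        if (!gst_pad_peer_query_duration (demux->sinkpad, GST_FORMAT_BYTES,
                &demux->cached_duration))
          demux->cached_duration = -1;
      }

      return demux->cached_duration != -1 && offset >= demux->cached_duration;
    }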
This is meant to be so (https://wiki.xiph.org/MatroskaOpus - while
it is marked as a draft, this part was confirmed to be correct on
IRC), and allows one to determine whether a demuxed stream is
multistream or not, and thus set the multistream caps field
accordingly. In turn, this means downstream does not have to guess.
https://bugzilla.gnome.org/show_bug.cgi?id=740744
It's like rendering a buffer list, just with one buffer.
This has the added advantage that, if there are multiple clients,
we can send the buffer to all of them in one go.
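A sketch of that shared path; render_buffers() is a hypothetical helper
standing in for the element's actual sending code:

    #include <gst/gst.h>
    #include <gst/base/gstbasesink.h>

    /* Hypothetical helper that sends the given buffers to all clients. */
    static GstFlowReturn render_buffers (GstBaseSink * sink,
        GstBuffer ** buffers, guint num_buffers);

    static GstFlowReturn
    my_sink_render (GstBaseSink * sink, GstBuffer * buffer)
    {
      /* Rendering one buffer is just the list case with n == 1, and all
       * clients get it in a single pass. */
      return render_buffers (sink, &buffer, 1);
    }

    static GstFlowReturn
    my_sink_render_list (GstBaseSink * sink, GstBufferList * list)
    {
      guint i, n = gst_buffer_list_length (list);
      GstBuffer **buffers = g_newa (GstBuffer *, n);

      for (i = 0; i < n; i++)
        buffers[i] = gst_buffer_list_get (list, i);

      return render_buffers (sink, buffers, n);
    }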
We unlock and re-lock the client lock while emitting the
removed signal, which causes inconsistencies in the client
list vs. the client counts. Instead, remove the client from
the list already before emitting the signal and put it into
a temporary list of clients to be removed. That way things
look consistent to the streaming thread, but signal callbacks
can still do things like get stats from removed clients.
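A rough sketch of that pattern with placeholder types and names (the
real element's structures and signal differ): move the clients onto a
temporary list while holding the lock, then emit the signal unlocked.

    #include <gst/gst.h>

    /* Placeholder types; illustrative only. */
    typedef struct { gchar *host; gint port; } MyClient;
    typedef struct {
      GstObject parent;
      GMutex    client_lock;
      GList    *clients;       /* list of MyClient */
      guint     num_clients;
    } MySink;

    static void
    my_sink_remove_all (MySink * sink)
    {
      GList *removed;

      g_mutex_lock (&sink->client_lock);
      /* Move everything onto a temporary list while holding the lock, so
       * the streaming thread never sees a list/count mismatch. */
      removed = sink->clients;
      sink->clients = NULL;
      sink->num_clients = 0;
      g_mutex_unlock (&sink->client_lock);

      /* Emit the signal for each client without the lock held; callbacks
       * can still inspect the removed clients, e.g. to fetch stats. */
      for (GList *l = removed; l != NULL; l = l->next) {
        MyClient *client = l->data;
        g_signal_emit_by_name (sink, "client-removed", client->host,
            client->port);
      }
      g_list_free (removed);
    }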
Add prototype for a render_list() function that can use a
sendmmsg-style g_socket_send_messages() function once it lands
in GLib. We can use this infrastructure to send multiple buffers
made up of multiple memories to multiple clients in one go, which
drastically reduces the number of syscalls made when sending
high-bitrate video streams.
https://bugzilla.gnome.org/show_bug.cgi?id=732152
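The GLib API in question, g_socket_send_messages() (it landed in GLib
2.44), takes an array of GOutputMessage structs, each carrying several
GOutputVectors. A rough sketch of mapping one buffer's memories onto
such a message for a single destination; the real element's mapping
bookkeeping and error handling are simplified here.

    #include <gio/gio.h>
    #include <gst/gst.h>

    /* Sketch: send one GstBuffer (possibly made up of several GstMemory
     * blocks) to one destination with a single syscall. */
    static gboolean
    send_buffer (GSocket * socket, GSocketAddress * addr, GstBuffer * buffer)
    {
      guint i, n_mem = gst_buffer_n_memory (buffer);
      GOutputVector *vecs = g_newa (GOutputVector, n_mem);
      GstMapInfo *maps = g_newa (GstMapInfo, n_mem);
      GOutputMessage msg = { 0, };
      gboolean ret;

      /* One scatter-gather vector per memory block, no copying. */
      for (i = 0; i < n_mem; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buffer, i);
        gst_memory_map (mem, &maps[i], GST_MAP_READ);
        vecs[i].buffer = maps[i].data;
        vecs[i].size = maps[i].size;
      }

      msg.address = addr;
      msg.vectors = vecs;
      msg.num_vectors = n_mem;

      /* With an array of messages (one per buffer and/or client) this
       * becomes a sendmmsg()-style single syscall for many packets. */
      ret = g_socket_send_messages (socket, &msg, 1, 0, NULL, NULL) == 1;

      for (i = 0; i < n_mem; i++)
        gst_memory_unmap (gst_buffer_peek_memory (buffer, i), &maps[i]);

      return ret;
    }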
Use the refcount for memory management and keep track
of the number of duplicate clients in a separate
variable. This will be useful later, and means we
don't have to hold the OBJECT_LOCK all the time.
https://bugzilla.gnome.org/show_bug.cgi?id=732866
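A minimal sketch of that split, with invented names: the refcount only
governs the client's lifetime, while a separate add_count records how
many times the same host/port was added.

    #include <glib.h>

    /* Placeholder client structure; names are illustrative. */
    typedef struct {
      gint   refcount;   /* lifetime only; no OBJECT_LOCK needed to touch it */
      guint  add_count;  /* how many times this host/port was added */
      gchar *host;
      gint   port;
    } MyClient;

    static MyClient *
    my_client_ref (MyClient * client)
    {
      g_atomic_int_inc (&client->refcount);
      return client;
    }

    static void
    my_client_unref (MyClient * client)
    {
      if (g_atomic_int_dec_and_test (&client->refcount)) {
        g_free (client->host);
        g_free (client);
      }
    }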
Since "basetransform: Fix caps equality check" commit a7f357,
set_info() will not be called anymore if crop didn't change
the caps. This is fixed by setting "need_update" boolean when
cropping properties has been changed, and then applying these
if they where not applied before rendering the next frame. This
patch also fixed the locking, dropping un-needed custom lock,
and no holding needless lock while doing the operation as we
already hold the streaming lock.
https://bugzilla.gnome.org/show_bug.cgi?id=740787
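A sketch of that deferred-update pattern with placeholder types and a
hypothetical do_crop() helper (not the actual videocrop code): the
setter only records the new value and flags need_update, and the value
is taken over on the streaming thread before the next frame.

    #include <gst/video/video.h>
    #include <gst/video/gstvideofilter.h>

    /* Placeholder element; the real videocrop structure differs. */
    typedef struct {
      GstVideoFilter parent;
      gint left, prop_left;     /* applied value vs. pending property value */
      gboolean need_update;
    } MyCrop;

    static GstFlowReturn do_crop (MyCrop * crop, GstVideoFrame * in,
        GstVideoFrame * out);   /* hypothetical */

    static void
    my_crop_set_left (MyCrop * crop, gint left)
    {
      GST_OBJECT_LOCK (crop);
      crop->prop_left = left;
      crop->need_update = TRUE;   /* applied lazily on the streaming thread */
      GST_OBJECT_UNLOCK (crop);
    }

    static GstFlowReturn
    my_crop_transform_frame (MyCrop * crop, GstVideoFrame * in,
        GstVideoFrame * out)
    {
      GST_OBJECT_LOCK (crop);
      if (crop->need_update) {
        /* We are on the streaming thread here, so no extra custom lock is
         * needed; just take over the pending value. */
        crop->left = crop->prop_left;
        crop->need_update = FALSE;
      }
      GST_OBJECT_UNLOCK (crop);

      return do_crop (crop, in, out);
    }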
In some cases the currently set GstVideoInfo is not interlaced, but
the upstream caps are interlaced and that information is present in
the filter caps. We should take it into account and make sure we do
not treat this case as passthrough.
https://bugzilla.gnome.org/show_bug.cgi?id=741407
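A sketch of the kind of check this implies (the function itself is
illustrative): look at the interlace-mode in the caps rather than only
at the currently configured GstVideoInfo before deciding on passthrough.

    #include <gst/gst.h>

    /* Sketch: TRUE if the caps describe interlaced video, so the caller
     * knows not to treat the conversion as passthrough. */
    static gboolean
    caps_are_interlaced (const GstCaps * caps)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);
      const gchar *mode = gst_structure_get_string (s, "interlace-mode");

      /* An absent field means progressive by convention. */
      return mode != NULL && g_strcmp0 (mode, "progressive") != 0;
    }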
A race condition in the state change function may cause buffers
to be unreffed while they are still used by the streaming thread
in gst_rtp_h264_pay_send_sps_pps() resulting in a crash. Chain
up to the parent class first in the state change function to
make sure streaming has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
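The general shape of that fix, sketched with a placeholder payloader
type (the GObject boilerplate that normally sets up parent_class is
omitted): chain up first so the streaming thread is stopped before the
cached buffers are freed.

    #include <gst/gst.h>

    /* Placeholder payloader; only the ordering matters here. */
    typedef struct {
      GstElement parent;
      GList *sps, *pps;     /* cached parameter-set buffers */
    } MyPay;

    static GstElementClass *parent_class;   /* set by the usual boilerplate */

    static GstStateChangeReturn
    my_pay_change_state (GstElement * element, GstStateChange transition)
    {
      MyPay *pay = (MyPay *) element;
      GstStateChangeReturn ret;

      /* Chain up first: on the way down this stops the streaming thread,
       * so nothing can still be using the buffers freed below. */
      ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);

      switch (transition) {
        case GST_STATE_CHANGE_PAUSED_TO_READY:
          g_list_free_full (pay->sps, (GDestroyNotify) gst_buffer_unref);
          g_list_free_full (pay->pps, (GDestroyNotify) gst_buffer_unref);
          pay->sps = pay->pps = NULL;
          break;
        default:
          break;
      }

      return ret;
    }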
When dealing with fragmented files, we will get more accurate duration
information via the mfra and moof atoms.
In order for playback to not stop at the initial duration (from the
moov atom), we need to check and update the various duration variables
when we find more information.
Fixes playback of fragmented files in pull mode
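Conceptually the update looks like this (a sketch with an invented
demuxer struct; the real qtdemux tracks several duration fields per
stream): whenever a moof/mfra yields a larger duration, take it over
and notify the application.

    #include <gst/gst.h>

    /* Placeholder demuxer; illustrative only. */
    typedef struct {
      GstElement parent;
      GstClockTime duration;       /* best known duration so far */
    } MyDemux;

    static void
    my_demux_update_duration (MyDemux * demux, GstClockTime new_duration)
    {
      if (GST_CLOCK_TIME_IS_VALID (new_duration) &&
          (!GST_CLOCK_TIME_IS_VALID (demux->duration) ||
              new_duration > demux->duration)) {
        demux->duration = new_duration;
        /* Tell the application that a duration query may now return more. */
        gst_element_post_message (GST_ELEMENT (demux),
            gst_message_new_duration_changed (GST_OBJECT (demux)));
      }
    }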
Adds a new set of properties to make pushfilesrc output a TIME SEGMENT
(instead of the filesrc BYTE SEGMENT).
When time-segment is set to TRUE, the following will happen (a code sketch follows the list):
* Seeks are refused (data starts from the beginning of the file)
* The BYTE segment will be replaced by a TIME segment with the values
specified in the various properties
* The first outgoing buffer will have a timestamp set on it (by default
it has a value of GST_CLOCK_TIME_NONE)
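A sketch of what the element does in that mode, using generic GStreamer
calls; the parameter names here are illustrative and map onto the new
properties.

    #include <gst/gst.h>

    /* Sketch: replace the filesrc BYTE segment with a TIME segment built
     * from the element's properties and stamp the first buffer. */
    static void
    push_time_segment (GstPad * srcpad, GstClockTime start, GstClockTime time,
        gdouble rate, GstClockTime initial_ts, GstBuffer * first_buffer)
    {
      GstSegment segment;

      gst_segment_init (&segment, GST_FORMAT_TIME);
      segment.start = start;
      segment.time = time;
      segment.rate = rate;
      gst_pad_push_event (srcpad, gst_event_new_segment (&segment));

      /* Stamp the first outgoing buffer; initial_ts defaults to
       * GST_CLOCK_TIME_NONE, i.e. "no timestamp". */
      GST_BUFFER_PTS (first_buffer) = initial_ts;
    }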
When seeking or finding the previous keyframe, do
comparisons against targets and segments using composition time
to correctly decide which sample times match.
We used to set up an iterator with one GValue holding a NULL object
pointer, which is not the normal way to do it. Instead we should make
sure that the first call to gst_iterator_next returns GST_ITERATOR_DONE.
Currently during header parsing, we scan through the entire file
and skip every moof+mdat chunk for fragmented mp4s, which makes
start-up incredibly slow. Instead, just stop at the first moof
chunk once we have a moov, and start exposing the streams, so we
can go and start handling the moofs for real.
When a caps event is received, we must immediately update the crop to
the dimensions from the new caps; otherwise videocrop will keep using
the previous crop values, which can cause an error when the video is
resized to a smaller resolution.
https://bugzilla.gnome.org/show_bug.cgi?id=740671
Empty segments in an edit list have a media_start time of -1,
as they don't actually play any media. Allow for that when
aligning to the reference stream in reverse play.