Large-scale skip is an optimization, so it is safer to stop skipping
than to continue. Clear the skip state on segments and discontinuities,
as these are points where the original notion of "bytes to skip" may no
longer apply.
gstbuffer.c:522:58: error: implicit conversion from enumeration type 'GstBufferFlags' to
different enumeration type 'GstBufferCopyFlags' [-Werror,-Wenum-conversion]
if (!gst_buffer_copy_into (copy, (GstBuffer *) buffer, flags, 0, -1))
~~~~~~~~~~~~~~~~~~~~ ^~~~~
gstbuffer.c:534:46: error: implicit conversion from enumeration type 'GstBufferCopyFlags' to
different enumeration type 'GstBufferFlags' [-Werror,-Wenum-conversion]
return gst_buffer_copy_with_flags (buffer, GST_BUFFER_COPY_ALL);
~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~~~
./gstbuffer.h:433:31: note: expanded from macro 'GST_BUFFER_COPY_ALL'
...((GstBufferCopyFlags)(GST_BUFFER_COPY_METADATA | GST_BUFFER_COPY_MEMORY))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
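For illustration, a minimal sketch of the kind of fix such
-Wenum-conversion errors call for: give the helper the enum type its
callers actually pass (or cast explicitly) so no implicit conversion
between GstBufferFlags and GstBufferCopyFlags remains. The function body
below is a paraphrase for illustration, not the upstream code.

```c
#include <gst/gst.h>

static GstBuffer *
copy_with_flags_sketch (const GstBuffer * buffer, GstBufferCopyFlags flags)
{
  /* the parameter is declared as GstBufferCopyFlags, which is what
   * gst_buffer_copy_into() expects, so no implicit enum conversion */
  GstBuffer *copy = gst_buffer_new ();

  if (!gst_buffer_copy_into (copy, (GstBuffer *) buffer, flags, 0, -1)) {
    gst_buffer_unref (copy);
    return NULL;
  }
  return copy;
}
```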
GstNetControlMessageMeta can be used to store ancillary data which was
received with, or is to be sent alongside, the buffer data. When used
with socket sinks and sources which understand this meta, it allows
sending and receiving ancillary data such as Unix credentials (see
`GUnixCredentialsMessage`) and Unix file descriptors (see
`GUnixFDMessage`).
This will be useful for implementing protocols which use file-descriptor
passing in payloaders/depayloaders without having to re-implement all the
socket handling code already present in elements such as multisocketsink.
This, in turn, will be useful for implementing zero-copy video IPC.
This meta uses the platform-independent `GSocketControlMessage` API
provided by GLib as part of GIO. As a result, the new meta does not
require any new dependencies or any conditional compilation for
portability, although it is unlikely to do anything useful on non-UNIX
platforms.
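As a rough sketch of how this could be used from element or application
code: the GIO calls below (g_unix_fd_message_new(),
g_unix_fd_message_append_fd()) are plain GLib API, while the
gst_buffer_add_net_control_message_meta() helper and its header are an
assumption about the new meta's API, not something stated above.

```c
#include <gst/gst.h>
#include <gio/gio.h>
#ifdef G_OS_UNIX
#include <gio/gunixfdmessage.h>
#include <gst/net/gstnetcontrolmessagemeta.h>  /* assumed header for the new meta */
#endif

static void
attach_fd_to_buffer (GstBuffer * buffer, gint fd)
{
#ifdef G_OS_UNIX
  GSocketControlMessage *msg = g_unix_fd_message_new ();
  GError *error = NULL;

  if (!g_unix_fd_message_append_fd (G_UNIX_FD_MESSAGE (msg), fd, &error)) {
    g_warning ("could not append fd: %s", error->message);
    g_clear_error (&error);
  } else {
    /* assumed add-meta helper; a socket sink understanding the meta
     * would then send the control message alongside the buffer data */
    gst_buffer_add_net_control_message_meta (buffer, msg);
  }
  g_object_unref (msg);
#endif
}
```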
Don't stop the pool in set_config(). Instead, let the controlling
element manage it. Most of the time, when an active pool is being
reconfigured, it is because the caps didn't change.
https://bugzilla.gnome.org/show_bug.cgi?id=745377
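A minimal sketch of the pattern this implies for the controlling
element: deactivate the pool itself, update the config, and reactivate,
rather than relying on set_config() to stop the pool. The parameters are
whatever the element negotiated; this is not the actual upstream code.

```c
#include <gst/gst.h>

static gboolean
reconfigure_pool (GstBufferPool * pool, GstCaps * caps, guint size,
    guint min_buffers, guint max_buffers)
{
  GstStructure *config;

  /* the element, not set_config(), decides when the pool stops */
  if (!gst_buffer_pool_set_active (pool, FALSE))
    return FALSE;

  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_set_params (config, caps, size, min_buffers,
      max_buffers);
  if (!gst_buffer_pool_set_config (pool, config))
    return FALSE;

  return gst_buffer_pool_set_active (pool, TRUE);
}
```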
Allow buffers to be reclaimed when the caps are to be renegotiated, so
that bufferpools can be stopped. As the allocation query is serialized,
all buffers have already been drained from the pipeline, except for this
last_sample one.
https://bugzilla.gnome.org/show_bug.cgi?id=682770
Use gst_buffer_copy_deep() to force a copy of the underlying memory,
instead of possibly doing a shallow copy of the buffer and just
referencing the memory.
https://bugzilla.gnome.org/show_bug.cgi?id=745287
A variant of gst_buffer_copy that forces the underlying memory to be
copied.
This is added to avoid taking an extra reference on a GstMemory that
might belong to a bufferpool that is being drained. The use case is
copying a buffer in order to release the old buffer and all its
resources.
https://bugzilla.gnome.org/show_bug.cgi?id=745287
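A short sketch of the intended usage: gst_buffer_copy() may merely ref
the underlying GstMemory, keeping a draining pool's memory alive, while
gst_buffer_copy_deep() forces a real copy so the original buffer can be
released. The helper name below is illustrative.

```c
#include <gst/gst.h>

static GstBuffer *
detach_from_pool (GstBuffer * pooled)
{
  /* deep copy: no shared GstMemory with the pooled buffer */
  GstBuffer *copy = gst_buffer_copy_deep (pooled);

  /* dropping the last ref returns the buffer (and its memory) to the pool */
  gst_buffer_unref (pooled);
  return copy;
}
```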
Don't take the lock while unreferencing messages, because that may cause
more messages to be sent; these will try to take the lock again and
cause the app to hang.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=728777
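A generic sketch of the "don't unref under the lock" pattern described
here: move the messages out of the protected queue first, drop the lock,
and only then unref them, since unreffing may trigger new messages that
need the same lock. The mutex and queue names are illustrative, not the
actual bus internals.

```c
#include <gst/gst.h>

static void
flush_pending_messages (GMutex * lock, GQueue * pending)
{
  GstMessage *msg;
  GQueue local = G_QUEUE_INIT;

  g_mutex_lock (lock);
  while ((msg = g_queue_pop_head (pending)))
    g_queue_push_tail (&local, msg);
  g_mutex_unlock (lock);

  /* safe: no lock held while potentially re-entering message posting */
  while ((msg = g_queue_pop_head (&local)))
    gst_message_unref (msg);
}
```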
gst_bin_sync_children_states() will iterate over all the elements of a
bin and sync their states with the state of the bin. This is useful when
adding many elements to a bin, where one would otherwise have to call
gst_element_sync_state_with_parent() on each and every one of them.
https://bugzilla.gnome.org/show_bug.cgi?id=745042
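A small usage sketch: add a couple of elements to an already-running bin
and bring them to the bin's state in one call. The element factories
here are just examples.

```c
#include <gst/gst.h>

static void
add_branch (GstBin * bin, GstElement * queue, GstElement * sink)
{
  gst_bin_add_many (bin, queue, sink, NULL);
  gst_element_link (queue, sink);

  /* instead of gst_element_sync_state_with_parent() on each element */
  gst_bin_sync_children_states (bin);
}
```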
Demultiplex a stream to multiple source pads based on the stream ids from the
stream-start events. This basically reverses the behaviour of funnel.
https://bugzilla.gnome.org/show_bug.cgi?id=707605
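A sketch of how such an element would typically be used: one source pad
appears per stream-id, so new pads are picked up via "pad-added" like
with any demuxer. The factory name "streamiddemux" is assumed here.

```c
#include <gst/gst.h>

static void
on_pad_added (GstElement * demux, GstPad * pad, gpointer user_data)
{
  gchar *name = gst_pad_get_name (pad);

  g_print ("new per-stream pad: %s\n", name);
  g_free (name);
}

static GstElement *
make_stream_id_demux (void)
{
  GstElement *demux = gst_element_factory_make ("streamiddemux", NULL);

  g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), NULL);
  return demux;
}
```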
We use the segment format to detect whether we are running the streaming
thread or not. Without resetting it, we might believe we still are,
although we only did so in the past and are now running in e.g. push
mode.
https://bugzilla.gnome.org/show_bug.cgi?id=745073
Based on patch from Song Bing <b06498@freescale.com>
Don't just set the need_preroll flag to TRUE in all cases. When we are
already prerolled it needs to be set to FALSE, and when we go to READY
we should not touch it. We should only set it to TRUE in the other
cases, as the code above does.
See https://bugzilla.gnome.org/show_bug.cgi?id=736655
Only shorten the __FILE__ path if we are actually going to output it and
it looks like it needs shortening, instead of always shortening it even
when the log message is not printed (which can happen if the log level
is activated but the category is not). Log handlers had no guarantee
that they would get a name instead of a path anyway, so this shouldn't
be a problem.
https://bugzilla.gnome.org/show_bug.cgi?id=745213
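The shortening itself amounts to something like the sketch below, run
only at output time; this mirrors the idea rather than the exact
upstream helper.

```c
#include <string.h>
#include <glib.h>

static const gchar *
shorten_file_name (const gchar * file)
{
  /* keep only the part after the last directory separator, if any */
  const gchar *sep = strrchr (file, G_DIR_SEPARATOR);

  return sep ? sep + 1 : file;
}
```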
It might still be waiting for a query to be handled, or for the queue to
become empty again for the next item. Also, if downstream returns
FLUSHING, flush the queue as we do in queue and multiqueue.
Otherwise we might wait forever for serialized queries to be handled:
the loop function is stopped, so we would never dequeue the query and
handle it.
https://bugzilla.gnome.org/show_bug.cgi?id=745319
A single unlinked pad can make the latency query fail across the whole
pipeline, which is probably not desirable. Instead, return a default
"anything goes" value.
Perhaps we should also be emitting a gst_message_new_latency() when a
PLAYING element has one of its pads linked.
https://bugzilla.gnome.org/show_bug.cgi?id=745197
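The permissive default amounts to something like this sketch when
answering the query for the unlinked pad: not live, zero minimum,
unbounded maximum.

```c
#include <gst/gst.h>

static gboolean
answer_latency_with_default (GstQuery * query)
{
  /* "anything goes": not live, min 0, no upper bound */
  gst_query_set_latency (query, FALSE, 0, GST_CLOCK_TIME_NONE);
  return TRUE;
}
```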