Stores the last result of a gst_pad_push() or a pull on the GstPad and provides
a getter and a macro to access this field.
Whenever the pad is inactive, it is set to FLUSHING.
API: gst_pad_get_last_flow_return
https://bugzilla.gnome.org/show_bug.cgi?id=709224
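A minimal sketch of how the new getter could be used (illustrative only; `srcpad'
is assumed to be an existing, linked source pad):

static void
check_downstream_flow (GstPad * srcpad)
{
  GstFlowReturn last = gst_pad_get_last_flow_return (srcpad);

  if (last == GST_FLOW_FLUSHING) {
    /* inactive pads report FLUSHING */
    GST_DEBUG ("pad %s:%s is not active", GST_DEBUG_PAD_NAME (srcpad));
  } else if (last < GST_FLOW_OK) {
    GST_WARNING ("last push/pull on %s:%s failed: %s",
        GST_DEBUG_PAD_NAME (srcpad), gst_flow_get_name (last));
  }
}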
Currently there is no way to unlock a buffer pool other than stopping it.
This may have the effect of freeing all the buffers, which is too heavy
for a seek. This patch adds a method to enter and leave the flushing
state. As a convenience, flush_start/flush_stop virtual methods are added
so pool implementations can also unblock their own internal poll
atomically with the rest of the pool. This is fully backward compatible
with doing stop/start to actually flush the pool (as is done in
GstBaseSrc).
https://bugzilla.gnome.org/show_bug.cgi?id=727611
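A minimal sketch of the intended usage, assuming the method added here is
gst_buffer_pool_set_flushing() and `pool' is an active pool owned by the element:

static void
flush_pool_for_seek (GstBufferPool * pool)
{
  /* enter flushing: unblocks pending gst_buffer_pool_acquire_buffer()
   * calls without freeing the pooled buffers (unlike a full stop/start) */
  gst_buffer_pool_set_flushing (pool, TRUE);

  /* ... perform the flushing seek ... */

  /* leave flushing so acquire_buffer() can be used again */
  gst_buffer_pool_set_flushing (pool, FALSE);
}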
When we call gst_buffer_pool_set_config() the pool may return FALSE and
slightly change the parameters. This helper is useful to do the minimal required
validation before accepting the modified configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=727916
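A sketch of the intended pattern, assuming the helper referred to here is
gst_buffer_pool_config_validate_params(); `caps', `size' and `min_buffers' are
the values the element originally requested:

static gboolean
setup_pool (GstBufferPool * pool, GstStructure * config, GstCaps * caps,
    guint size, guint min_buffers)
{
  if (!gst_buffer_pool_set_config (pool, config)) {
    /* the pool tweaked the config; fetch it back and check whether the
     * modified parameters are still acceptable for us */
    config = gst_buffer_pool_get_config (pool);

    if (!gst_buffer_pool_config_validate_params (config, caps, size,
            min_buffers, 0)) {
      gst_structure_free (config);
      return FALSE;
    }
    if (!gst_buffer_pool_set_config (pool, config))
      return FALSE;
  }
  return gst_buffer_pool_set_active (pool, TRUE);
}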
If a pool config is being configured again, check if the configuration has changed.
If not, skip that step. Finally, if the pool is active, try deactivating it.
https://bugzilla.gnome.org/show_bug.cgi?id=728268
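A rough sketch of the resulting behaviour (names are illustrative; `pool' is
assumed to already have a configuration applied and may be active):

static gboolean
reconfigure_pool (GstBufferPool * pool, GstCaps * caps, guint size,
    guint min_buffers, guint max_buffers)
{
  GstStructure *config = gst_buffer_pool_get_config (pool);

  gst_buffer_pool_config_set_params (config, caps, size, min_buffers,
      max_buffers);

  /* if nothing actually changed, set_config() returns TRUE without touching
   * the pool; if something did change and the pool is active, it tries to
   * deactivate it itself before applying the new configuration */
  return gst_buffer_pool_set_config (pool, config);
}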
Keep it simple. Likely also makes things easier for bindings,
and efficiency clearly has not been a consideration given how
the existing code handled these lists.
In order to be deterministic, multiple waiting GstClockIDs need to be
released at the same time, or else one can get into the situation where
the one released first adds itself back again before the next waiting
one is released.
Test added for new API and old tests rewritten to comply.
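A rough sketch of what a test can now do, assuming the new API is the
GstTestClock id-list helpers (gst_test_clock_wait_for_multiple_pending_ids() and
gst_test_clock_process_id_list()); `expected' is the number of waiters the test
has set up:

#include <gst/check/gstcheck.h>
#include <gst/check/gsttestclock.h>

static void
release_all_pending (GstTestClock * test_clock, guint expected)
{
  GList *pending = NULL;
  guint released;

  /* block until the expected number of GstClockIDs are waiting */
  gst_test_clock_wait_for_multiple_pending_ids (test_clock, expected,
      &pending);

  /* advance the clock past the latest pending id and release them all in
   * one call, so an id re-added from a callback cannot overtake the others */
  gst_test_clock_set_time (test_clock,
      gst_test_clock_id_list_get_latest_time (pending));
  released = gst_test_clock_process_id_list (test_clock, pending);
  fail_unless_equals_int (released, expected);

  g_list_free_full (pending, (GDestroyNotify) gst_clock_id_unref);
}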
From the test case:
/* This test creates a multiqueue with 2 streams. One receives
* a constant flow of buffers, the other only gets one buffer, and then
* new-segment events, and returns not-linked. The multiqueue should not fill.
*/
If one of the queues goes EOS and the other returns NOT_LINKED, the stream
can be considered EOS, as NOT_LINKED means that one of the branches has no
sink downstream that would block the EOS message posting.
https://bugzilla.gnome.org/show_bug.cgi?id=725917
Tag allocated buffers with TAG_MEMORY. When they are released later,
only add them back to the pool if the tag is still there and the memory
has not been changed, otherwise throw the buffer away.
Add unit test to check various scenarios.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724481
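A behavioural sketch under this change (illustrative element code; `pool' is
assumed to be an active pool): changing a pooled buffer's memory means the pool
discards it on release instead of recycling it.

static void
acquire_and_modify (GstBufferPool * pool, GstMemory * extra_mem)
{
  GstBuffer *buf;

  if (gst_buffer_pool_acquire_buffer (pool, &buf, NULL) != GST_FLOW_OK)
    return;

  /* the memory change is tracked via GST_BUFFER_FLAG_TAG_MEMORY ... */
  gst_buffer_append_memory (buf, extra_mem);

  /* ... so on release the pool detects it and frees the buffer instead of
   * putting it back into its queue */
  gst_buffer_unref (buf);
}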
Store the EOS event seqnum and use it when creating the new EOS event to
be pushed downstream. To know whether the EOS was caused by an EOS event
received via send_event, a 'forced_eos' flag is used, so that the correct
seqnum is set on the event pushed downstream.
Useful if the application wants to check whether the EOS message was
generated from its own pushed EOS or from another source (the stream
really finished).
Also adds a test for this.
https://bugzilla.gnome.org/show_bug.cgi?id=722791
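A hypothetical application-side check this enables: remember the seqnum of the
EOS event that was sent and compare it with the seqnum of the EOS message on the
bus.

static guint32
send_forced_eos (GstElement * pipeline)
{
  GstEvent *eos = gst_event_new_eos ();
  guint32 seqnum = gst_event_get_seqnum (eos);

  /* send_event() takes ownership of the event */
  gst_element_send_event (pipeline, eos);
  return seqnum;
}

static gboolean
eos_was_forced (GstMessage * msg, guint32 forced_seqnum)
{
  return GST_MESSAGE_TYPE (msg) == GST_MESSAGE_EOS &&
      gst_message_get_seqnum (msg) == forced_seqnum;
}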
Baseparse stores buffers for reverse playback to push on the next
DISCONT. The issue was that it would never check for a discont in
passthrough mode, as it skips all real parsing. This test was created
to verify this issue and prevent it from happening again.
https://bugzilla.gnome.org/show_bug.cgi?id=721941
Checking the lower bound twice is great (you never know, it might change
between the two calls by someone using emacs butterfly-mode), but it's a bit
more useful to check that the upper bound is also identical.
Detected by Coverity
gst_parse_launchv, gst_parse_launchv_full and gst_parse_launch_full
all return floating refs, the same as gst_parse_launch, which just
calls gst_parse_launch_full internally anyway.
Add a unit test assertion to check it's true.
Spotted by nemequ on IRC.
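A small sketch of the documented behaviour: the returned element carries a
floating reference, which the caller sinks to take ownership.

static GstElement *
make_pipeline (void)
{
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch ("fakesrc ! fakesink", &error);

  if (pipeline == NULL) {
    g_clear_error (&error);
    return NULL;
  }
  /* a usable pipeline may still come with a recoverable warning set */
  g_clear_error (&error);

  /* same floating-ref behaviour as gst_parse_launchv() and
   * gst_parse_launch_full() */
  g_assert (g_object_is_floating (pipeline));

  /* take ownership by sinking the floating reference */
  return gst_object_ref_sink (pipeline);
}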
The check itself is racy.
(CK_FORK=no GST_CHECK=test_output_order make elements/multiqueue.forever).
The problem is indeed the test and not the actual element behaviour.
The objects to push are being pulled out of the single internal queues in the
right order and at the right time...
But between:
* the moment the global multiqueue lock is released (which was used to detect
if we should pop and push downstream the next buffer)
* and the moment it is received by the source pad (which does the check)
=> another single queue (like the unlinked pad) might pop and push a buffer
downstream
What should we do? Putting a bigger margin of error (say 5 buffers) doesn't
help; it'll eventually fail.
I can't see how we can detect this reliably.
https://bugzilla.gnome.org/show_bug.cgi?id=708661
Wrap caps strings so that serialization and deserialization of caps inside
caps can be handled. Otherwise the values from the inner caps are parsed
as if they belonged to the outer one.
https://bugzilla.gnome.org/show_bug.cgi?id=708772
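An illustrative round trip under this change (the "stream-caps" field name is
made up for the example): a GST_TYPE_CAPS value inside a structure now survives
gst_caps_to_string() / gst_caps_from_string().

static gboolean
caps_in_caps_roundtrip (void)
{
  GstCaps *inner, *outer, *parsed;
  gchar *str;
  gboolean equal;

  inner = gst_caps_from_string ("video/x-raw, format=(string)I420");
  outer = gst_caps_new_simple ("application/x-example",
      "stream-caps", GST_TYPE_CAPS, inner, NULL);

  /* the inner caps string is wrapped so the outer parser does not pick up
   * its fields */
  str = gst_caps_to_string (outer);
  parsed = gst_caps_from_string (str);

  equal = parsed != NULL && gst_caps_is_equal (outer, parsed);

  g_free (str);
  gst_caps_unref (inner);
  gst_caps_unref (outer);
  if (parsed)
    gst_caps_unref (parsed);

  return equal;
}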
Fixes an abort when the old specifiers are used. Fix up the conversion
specifier: it would get overwritten below with 'c', the extension format
char, which is then unhandled later, leading to the abort.
Also fix up and enable the unit test for this.
https://bugzilla.gnome.org/process_bug.cgi
These account for both possible type size mismatch AND -mms-bitfields
packing. Sizes are taken from an i686-w64-mingw32-built GStreamer,
gcc 4.8.0, mingw-w64 svn-r5685.
Fixes #697551
This behaves like any other caps feature but results in unfixed caps. It
would be used by elements that only look at the buffer metadata or are
currently working in passthrough mode, and as such don't care about any
specific features.
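A sketch of how such an element could advertise this (illustrative; the media
type is arbitrary):

static GstCaps *
make_any_features_caps (void)
{
  GstCaps *caps = gst_caps_new_empty_simple ("video/x-raw");

  /* ANY caps features match every concrete feature set, but the resulting
   * caps are not fixed and must be narrowed before negotiation completes */
  gst_caps_set_features (caps, 0, gst_caps_features_new_any ());

  return caps;
}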
These are meant to specify features in caps that are required
for a specific structure, for example a specific memory type
or meta.
Semantically they can be thought of as an extension of the media
type name of the structures and are handled exactly like that.
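A minimal sketch of attaching a specific feature to a caps structure
(illustrative values):

static GstCaps *
make_featured_caps (void)
{
  GstCaps *caps = gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "NV12", NULL);

  /* "memory:SystemMemory" is the implicit default; a platform-specific
   * memory type or a required meta would be specified the same way */
  gst_caps_set_features (caps, 0,
      gst_caps_features_new ("memory:SystemMemory", NULL));

  return caps;
}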