gst_debug_unset_threshold_for_name() used to go into an
infinite loop when there was more than one category in
the list. This test captures the problem by failing
via timeout.
https://bugzilla.gnome.org/show_bug.cgi?id=748321
In order to support some types of protected streams (such as those
protected using DASH Common Encryption) some per-buffer information
needs to be passed between elements.
This commit adds a GstMeta type called GstProtectionMeta that allows
protection-specific information to be added to a GstBuffer. An example
of its usage is qtdemux providing information to each output sample
that enables a downstream element to decrypt it.
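A minimal sketch of how the new meta might be attached and read back
(the structure name and field names used here, such as
"application/x-cenc", "iv" and "kid", are illustrative assumptions,
not mandated by the API):

    #include <gst/gst.h>

    /* Hypothetical helper: attach cipher parameters to a buffer in e.g.
     * qtdemux, then read them back in a downstream decryptor. */
    static void
    attach_and_read_protection_info (GstBuffer * buf, GstBuffer * iv,
        GstBuffer * key_id)
    {
      GstStructure *info;
      GstProtectionMeta *meta;

      info = gst_structure_new ("application/x-cenc",
          "iv", GST_TYPE_BUFFER, iv, "kid", GST_TYPE_BUFFER, key_id, NULL);

      /* the meta takes ownership of the GstStructure */
      gst_buffer_add_protection_meta (buf, info);

      /* downstream, the decryptor retrieves the same structure */
      meta = gst_buffer_get_protection_meta (buf);
      if (meta != NULL) {
        /* meta->info is the structure attached above */
      }
    }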
This commit adds a utility function to select a supported protection
system from the decryption elements found in the registry.
The gst_protection_select_system() function takes an array of
identifiers and searches the registry for an element of klass Decryptor
that supports one or more of the supplied identifiers. If multiple
elements are found, the one with the highest rank is selected.
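For illustration only (the DASH CENC system UUIDs below are just
example values), selecting a system might look like this:

    #include <gst/gst.h>

    static const gchar *system_ids[] = {
      "9a04f079-9840-4286-ab92-e65be0885f95",
      "edef8ba9-79d6-4ace-a3c8-27dcd51d21ed",
      NULL                          /* the array is NULL-terminated */
    };

    static const gchar *
    pick_protection_system (void)
    {
      /* Returns the identifier supported by the highest-ranked Decryptor
       * element in the registry, or NULL if none is installed. */
      return gst_protection_select_system (system_ids);
    }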
This commit adds a unit test for the gst_protection_select_system
function that adds a fake Decryptor element to the registry and then
checks that it can correctly be selected by the utility function.
This commit adds a unit test for GstProtectionMeta that creates
GstProtectionMeta and adds & removes it from a buffer and performs some
simple reference count checks.
API: gst_buffer_add_protection_meta()
API: gst_buffer_get_protection_meta()
API: gst_protection_select_system()
API: gst_protection_meta_api_get_type()
API: gst_protection_meta_get_info()
https://bugzilla.gnome.org/show_bug.cgi?id=705991
In order for a decrypter element to decrypt media protected using a
specific protection system, it first needs all the protection-system-specific
information necessary (e.g. information on how to acquire
the decryption keys) for that stream.
The GST_EVENT_PROTECTION event defined in this commit enables this
information to be passed from elements that extract it (e.g. qtdemux,
dashdemux) to elements that use it (e.g. a decrypter element).
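A hedged sketch of the round trip; the system UUID and the
"isobmff/pssh" origin string are placeholder values:

    #include <gst/gst.h>

    /* Producer side (e.g. qtdemux when it finds a 'pssh' box). */
    static void
    push_protection_event (GstPad * srcpad, GstBuffer * pssh)
    {
      GstEvent *event;

      event = gst_event_new_protection ("9a04f079-9840-4286-ab92-e65be0885f95",
          pssh, "isobmff/pssh");
      gst_pad_push_event (srcpad, event);
    }

    /* Consumer side (e.g. a decrypter's sink event function). */
    static gboolean
    decryptor_sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_PROTECTION) {
        const gchar *system_id, *origin;
        GstBuffer *data;

        gst_event_parse_protection (event, &system_id, &data, &origin);
        /* store the system-specific data needed to set up decryption */
      }
      return gst_pad_event_default (pad, parent, event);
    }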
API: GST_EVENT_PROTECTION
API: gst_event_new_protection()
API: gst_event_parse_protection()
https://bugzilla.gnome.org/show_bug.cgi?id=705991
Only save the messages we're interested in and expecting.
When run with *:9 we might get additional TRACE level
messages from other categories and then we don't end up
with the number of messages we expect.
This test adds an idle probe on an idle pad from a separate thread,
so that the callback is called immediately. The callback sits still
while we try to push a buffer on the same pad, which verifies
that the idle probe blocks data passing.
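Roughly, the test does something along these lines (a sketch, not the
actual test code):

    #include <gst/gst.h>

    static GstPadProbeReturn
    idle_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      /* The pad is idle, so this runs immediately in the thread that
       * called gst_pad_add_probe(). While we sit here, a gst_pad_push()
       * on the same pad from another thread must not get through. */
      g_usleep (G_USEC_PER_SEC);
      return GST_PAD_PROBE_REMOVE;
    }

    /* called from a separate thread in the test */
    static void
    add_idle_probe (GstPad * srcpad)
    {
      gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_IDLE, idle_cb, NULL, NULL);
    }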
https://bugzilla.gnome.org/show_bug.cgi?id=747852
When a bin changes states upwards, and a child fails to change,
any child that was already switched will not be reset to its
original state, leaving its state inconsistent with the bin,
which does not change state due to the failure.
If the state change was from NULL to READY, it means that deleting
this bin will cause those children to be deleted while not in
NULL state, which is a Bad Thing. For other upward changes, it
is less of a problem, as a subsequent switch back to NULL will
cause an actual downwards change on those inconsistent elements,
albeit from the "wrong" state.
We now reset state to the original one when a child fails.
Includes unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=747610
Use case: we want to block the source pad of a leaky queue and
drop the buffer that causes the block. If we return PROBE_DROP
then the buffer gets dropped, but we get called again. If we
return PROBE_OK we can't easily drop the buffer. If we just
replace the item in the GstPadProbeInfo structure with NULL,
GStreamer will push a NULL buffer to the next element when we
unblock the pad probe. This patch ensures it doesn't do that.
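As I read it, the pattern this enables looks roughly like the sketch
below (the ownership handling and the queue pad are my assumptions):

    #include <gst/gst.h>

    static GstPadProbeReturn
    block_and_drop_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      /* Take the buffer out of the probe info and drop it; returning
       * GST_PAD_PROBE_OK keeps the pad blocked, and with this patch no
       * NULL buffer is pushed downstream once the probe is removed. */
      gst_buffer_unref (GST_PAD_PROBE_INFO_BUFFER (info));
      info->data = NULL;
      return GST_PAD_PROBE_OK;
    }

    static void
    block_queue_src (GstPad * queue_srcpad)
    {
      gst_pad_add_probe (queue_srcpad,
          GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_BUFFER,
          block_and_drop_cb, NULL, NULL);
    }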
https://bugzilla.gnome.org/show_bug.cgi?id=734342
Update the test_seeking test case to verify that the render and render_list
virtual methods handle buffers and buffer lists containing multiple
memory blocks correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=747223
GstFileSink implements the render_list virtual method to render
a list of buffers. Update the test_seeking test case to also
check the render_list method implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=747100
Don't unwrap strings that start but don't finish with a double quote. If a
string is delimited by two quotes we unescape them and any special characters
in the middle (like \" or \\). If the first character or the last character
isn't a quote we assume it's part of an unescaped string.
Moved some deserialize_string unit tests because we don't try to unwrap strings
missing that second quote anymore.
https://bugzilla.gnome.org/show_bug.cgi?id=688625
Otherwise baseparse will consider empty streams to be an error while
an empty stream is a valid scenario. With this patch, errors would
only be emitted if the parser received data but wasn't able to
produce any output from it.
This change is only for push-mode operation, as in pull mode an
empty file can be considered an error for the one driving the
pipeline.
Includes a unit test for it
https://bugzilla.gnome.org/show_bug.cgi?id=733171
This property avoids not-linked errors when all the pads are unlinked
or when there are no source pads. This is useful in dynamic pipelines
where it can happen that for a short time there are no pads at all or
all downstream pads are not linked yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746436
3 new tests:
1) Tests that a stream that is empty (just an EOS event)
on an inactive pad doesn't get through and tamper
with the active pad that still has data
2) Tests that a stream that is shorter than the active one
(pushes EOS earlier) doesn't have its EOS pushed
3) Tests that switching to an inactive stream that has received
EOS will make input-selector push EOS
https://bugzilla.gnome.org/show_bug.cgi?id=746518
Do not do any checks for the start/stop in the new
gst_segment_to_running_time_full() method, we can let this be done by
the more capable gst_segment_clip() method. This allows us to remove the
enum of results and only return the sign of the calculated running-time.
We still need to keep the old clipping checks in the old
gst_segment_to_running_time() because they work slightly
differently than the _clip methods.
See https://bugzilla.gnome.org/show_bug.cgi?id=740575
Add a clip argument to gst_segment_to_running_time_full() to disable
the checks against the segment boundaries. This makes it possible to
generate an extrapolated running-time for timestamps outside of the
segment.
See https://bugzilla.gnome.org/show_bug.cgi?id=740575
Add a helper method to get a running-time with a few more features,
such as detecting if the value was before or after the segment and
reporting negative running-time.
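A small sketch of the intended use, with my own example values:

    #include <gst/gst.h>

    static void
    running_time_example (void)
    {
      GstSegment segment;
      guint64 rt;
      gint sign;

      gst_segment_init (&segment, GST_FORMAT_TIME);
      segment.start = 10 * GST_SECOND;

      /* A position before the segment start: the plain
       * gst_segment_to_running_time() would give no result, while the
       * _full() variant reports the magnitude and the sign separately. */
      sign = gst_segment_to_running_time_full (&segment, GST_FORMAT_TIME,
          5 * GST_SECOND, &rt);
      /* here sign is negative and rt holds the absolute running-time */
    }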
API: gst_segment_to_running_time_full()
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=740575
The position in the segment is relative to the start but the offset
isn't, so subtract the start from the position when setting the offset.
Add unit test for this as well.
Don't stop the pool in set_config(). Instead, let the controlling
element manage it. Most of the time, when an active pool is being
reconfigured, it is because the caps didn't change.
https://bugzilla.gnome.org/show_bug.cgi?id=745377
A variant of gst_buffer_copy that forces the underlying memory
to be copied.
This is added to avoid adding an extra reference to a GstMemory
that might belong to a bufferpool that is trying to be drained.
The use case is when the buffer copying is done to release the
old buffer and all its resources.
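If I read the API addition correctly this is exposed as
gst_buffer_copy_deep(); a minimal sketch of the described use case:

    #include <gst/gst.h>

    static GstBuffer *
    detach_from_pool (GstBuffer * pooled)
    {
      /* Force a copy of the underlying memory so the original buffer
       * (and its GstMemory) can go back to the pool being drained. */
      GstBuffer *copy = gst_buffer_copy_deep (pooled);

      gst_buffer_unref (pooled);
      return copy;
    }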
https://bugzilla.gnome.org/show_bug.cgi?id=745287
Demultiplex a stream to multiple source pads based on the stream ids from the
stream-start events. This basically reverses the behaviour of funnel.
https://bugzilla.gnome.org/show_bug.cgi?id=707605
This reverts commit 1911554cff.
This breaks the functionality of GST_PAD_FLAG_NEED_PARENT, the reason for this
flag is that if a pad is removed from a running element, you don't want
functions (such as chain or event) to be called on the pad without a parent set.
This can happen if you remove a request pad or sometimes pad from a running element.
I don't see the code that caused this in tsdemux, but if it needs to unset
the flag on remove, it should do it itself and then make sure that the parent
exists in any pad function.
Add domain checks for the input values, and a variable precision
calculation that loops if necessary to ensure we never overflow
accumulators and then silently produce garbage results.
Make the (non-public) linear regression function available for
unit testing by putting it in a separate source file the test
can include. Add a unit test that the new regression function
produces sensible results for several inputs taken from real-world
captures.
Pools are allowed to change the size in order to adapt padding, so
don't check the size. Normally a pool will change the size without
failing set_config(), but if it ends up changing the size beforehand,
the validate method may fail on a false positive.
https://bugzilla.gnome.org/show_bug.cgi?id=741420
If a task thread is calling pause on itself and the
controlling/"main" thread stops the task, it could end in a race
where gst_task_func loops and then checks for paused after the
controlling thread just changed the task state to stopped.
Hence the task would actually call func again even though it was
both paused and stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=740001
In this mode we accept previously set filter caps until
upstream renegotiates to something that is compatible
to the current filter caps.
This allows dynamic caps changes in the pipeline even
if there is a queue between any conversion element
and the capsfilter. Without this we would get not-negotiated
errors if timing is bad.
https://bugzilla.gnome.org/show_bug.cgi?id=739002
Use an array of constant strings so if arguments get
removed from it they are not considered leaked, and
valgrind is happy. Still some stuff leaking in GLib
though.
Make pipe socket non-blocking, so we don't
end up being blocked in a write on the pipe
while the src is eos and not reading data
any more, and thus we never unblock and never
notice that we're done. This would happen
quite reliably on the rpi.
Adds API to get or peek a sub-reader of a certain size from
a given byte reader. This is useful when parsing nested chunks,
one can easily get a byte reader for a sub-chunk and make
sure one never reads beyond the sub-chunk boundary.
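A sketch of parsing a nested chunk with the new API (the 4-byte
big-endian length prefix is an assumption about the container format):

    #include <gst/base/gstbytereader.h>

    static void
    parse_chunk (const guint8 * data, guint size)
    {
      GstByteReader reader, sub;
      guint32 chunk_len;

      gst_byte_reader_init (&reader, data, size);

      if (gst_byte_reader_get_uint32_be (&reader, &chunk_len) &&
          gst_byte_reader_get_sub_reader (&reader, &sub, chunk_len)) {
        /* 'sub' can only read the next chunk_len bytes, while 'reader'
         * has already been advanced past the whole sub-chunk */
      }
    }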
API: gst_byte_reader_peek_sub_reader()
API: gst_byte_reader_get_sub_reader()
Adds gst_byte_reader_masked_scan_uint32_peek so that, just like
GstAdapter, there are _peek and non-_peek versions.
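Something like the following should be possible with the _peek variant
(the MPEG-style start-code mask and pattern are just example values):

    #include <gst/base/gstbytereader.h>

    static gboolean
    find_startcode (GstByteReader * reader, guint * match_offset,
        guint32 * value)
    {
      guint offset;

      /* Scan without changing the reader position and also return the
       * 32 bits that matched via 'value'. */
      offset = gst_byte_reader_masked_scan_uint32_peek (reader,
          0xffffff00, 0x00000100, 0,
          gst_byte_reader_get_remaining (reader), value);

      if (offset == (guint) -1)
        return FALSE;
      *match_offset = offset;
      return TRUE;
    }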
Upgraded tests to check that the returned value is correct in the
_peek version
API: gst_byte_reader_masked_scan_uint32_peek
https://bugzilla.gnome.org/show_bug.cgi?id=728356
Complements my previous patch for gst_caps_set_features, which would
previously assert and leak the old GstCapsFeatures if the caps already
had a GstCapsFeatures and you were trying to replace it with a new one.
When no data is coming from the sinkpads and an EOS event
arrives at one of the sinkpads, funnel forwards the EOS
event downstream. It forwards the EOS because the lastsink pad
is NULL. Also, the funnel unit test case is not checking
the correct behaviour as it should: it should
fail if one of the sink pads already has EOS present on it and
we try to push one more EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=731716
Otherwise negative values will set all of the 64 bits due to two's
complement's definition of negative values.
Also add a test for negative int ranges.
When a pad is added the need-parent flag is set to true, so when
it is removed the flag should be set back to false.
This was preventing GstPads from being reused in elements (removed and
later re-added). A unit test was added to verify that this is
working now.
The use case is tsdemux that has a program-number property and
allows the user to switch programs. In order to do that tsdemux
will remove the pads of the current program and add from the new
ones. The removed pads are kept in the demuxer for later if the
user selects the old program again.
Adds a utility struct that is capable of storing and aggregating flow returns
associated with pads.
This way all demuxers will have a standard function to use and have the
same expected results.
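A rough sketch of how a demuxer might use it (the surrounding element
code, setup and teardown are assumed):

    #include <gst/base/gstflowcombiner.h>

    /* setup:    combiner = gst_flow_combiner_new ();
     *           gst_flow_combiner_add_pad (combiner, srcpad);  for each pad
     * teardown: gst_flow_combiner_free (combiner); */

    static GstFlowReturn
    push_and_combine (GstFlowCombiner * combiner, GstPad * srcpad,
        GstBuffer * buf)
    {
      GstFlowReturn ret = gst_pad_push (srcpad, buf);

      /* Record this pad's result and compute the combined return for
       * the whole element (e.g. NOT_LINKED on one pad is fine while
       * another pad is still OK). */
      return gst_flow_combiner_update_flow (combiner, ret);
    }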
Includes tests.
https://bugzilla.gnome.org/show_bug.cgi?id=709224
Stores the last result of a gst_pad_push or a pull on the GstPad and provides
a getter and a macro to access this field.
Whenever the pad is inactive it is set to FLUSHING
API: gst_pad_get_last_flow_return
https://bugzilla.gnome.org/show_bug.cgi?id=709224
Currently there is no way to unlock a buffer pool other than
stopping it. This may have the effect of freeing all the buffers,
which is too heavy for a seek. This patch adds a method to enter and
leave the flushing state. As a convenience, flush_start/flush_stop
virtual methods are added so pool implementations can also unblock their
own internal poll atomically with the rest of the pool. This is fully
backward compatible with doing stop/start to actually flush the pool
(as is done in GstBaseSrc).
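In an element, a flushing seek could then look roughly like this
(a sketch, assuming the entry point is gst_buffer_pool_set_flushing()):

    #include <gst/gst.h>

    static void
    handle_flush (GstBufferPool * pool, gboolean start)
    {
      if (start) {
        /* FLUSH_START: unblock pending acquire_buffer() calls without
         * stopping the pool, so its buffers stay allocated. */
        gst_buffer_pool_set_flushing (pool, TRUE);
      } else {
        /* FLUSH_STOP: make the pool usable again. */
        gst_buffer_pool_set_flushing (pool, FALSE);
      }
    }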
https://bugzilla.gnome.org/show_bug.cgi?id=727611
When we call gst_buffer_pool_set_config() the pool may return FALSE and
slightly change the parameters. This helper is useful to do the minimal
required validation before accepting the modified configuration.
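A sketch of the intended flow (assuming the helper is
gst_buffer_pool_config_validate_params(); the surrounding element logic
is mine):

    #include <gst/gst.h>

    static gboolean
    try_config (GstBufferPool * pool, GstStructure * config, GstCaps * caps,
        guint size, guint min_buffers)
    {
      if (gst_buffer_pool_set_config (pool, config))
        return TRUE;

      /* The pool kept a modified config; accept it if it still satisfies
       * our minimum requirements. */
      config = gst_buffer_pool_get_config (pool);
      if (!gst_buffer_pool_config_validate_params (config, caps, size,
              min_buffers, 0)) {
        gst_structure_free (config);
        return FALSE;
      }
      gst_structure_free (config);
      return TRUE;
    }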
https://bugzilla.gnome.org/show_bug.cgi?id=727916
If a pool config is being configured again, check if the configuration has changed.
If not, skip that step. Finally, if the pool is active, try deactivating it.
https://bugzilla.gnome.org/show_bug.cgi?id=728268
Keep it simple. Likely also makes things easier for bindings,
and efficiency clearly has not been a consideration given how
the existing code handled these lists.
In order to be deterministic, multiple waiting GstClockIDs need to be
released at the same time, or else one can get into the situation that
the one being released first can add itself back again before the next
waiting one is released.
Test added for new API and old tests rewritten to comply.
From the test case:
/* This test creates a multiqueue with 2 streams. One receives
* a constant flow of buffers, the other only gets one buffer, and then
* new-segment events, and returns not-linked. The multiqueue should not fill.
*/
If one of the queues goes EOS and the other returns NOT_LINKED, the stream
can be considered EOS, as a NOT_LINKED means that one of the branches has no
sink downstream that would block the EOS message posting.
https://bugzilla.gnome.org/show_bug.cgi?id=725917
Tag allocated buffers with TAG_MEMORY. When they are released later,
only add them back to the pool if the tag is still there and the memory
has not been changed, otherwise throw the buffer away.
Add unit test to check various scenarios.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724481
Store the EOS event seqnum and use it when creating the
new EOS event to be pushed downstream. A 'forced_eos' flag
records whether the EOS was caused by an EOS event received
via send_event, so that the correct seqnum is used on the
event pushed downstream.
Useful if the application wants to check if the EOS message
was generated from its own pushed EOS or from another source
(stream really finished).
Also adds a test for this
https://bugzilla.gnome.org/show_bug.cgi?id=722791
Baseparse stores buffers for reverse playback to push on the next
DISCONT. The issue was that it wouldn't ever check for a discont
in passthrough mode, as it skips all real parsing. This test
was created to verify this issue and prevent it from happening again.
https://bugzilla.gnome.org/show_bug.cgi?id=721941
Checking the lower bound twice is great (you never know, it might change
between the two calls by someone using emacs butterfly-mode), but it's a bit
more useful to check that the higher bounds are also identical.
Detected by Coverity
gst_parse_launchv, gst_parse_launchv_full and gst_parse_launch_full
all return floating refs, the same as gst_parse_launch, which just
calls gst_parse_launch_full internally anyway.
Add a unit test assertion to check it's true.
Spotted by nemequ on IRC.
The check itself is racy.
(CK_FORK=no GST_CHECK=test_output_order make elements/multiqueue.forever).
The problem is indeed the test and not the actual element behaviour.
The objects to push are being pulled out of the single internal queues in the
right order and at the right time...
But between:
* the moment the global multiqueue lock is released (which was used to detect
if we should pop and push downstream the next buffer)
* and the moment it is received by the source pad (which does the check)
=> another single queue (like the unlinked pad) might pop and push a buffer
downstream
What should we do? Putting a bigger margin of error (say 5 buffers) doesn't
help, it'll eventually fail.
I can't see how we can detect this reliably.
https://bugzilla.gnome.org/show_bug.cgi?id=708661
Wrap caps strings so that we can handle serialization and deserialization
of caps inside caps. Otherwise the values from the inner caps are parsed
as if they belonged to the outer one.
https://bugzilla.gnome.org/show_bug.cgi?id=708772
Bash 3's completion doesn't split words by characters in
COMP_WORDBREAKS. In particular it doesn't split at "=" signs. Now
_gst_launch_parse handles both bash 3 and 4 format of COMP_WORDS.
Note that "${cur%%=*}" means cur's value with the longest possible match
of "=*" deleted from the end; "${cur#*=}" means cur's value with the
shortest possible match of "*=" deleted from the beginning. See
http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
Regardless of the version of bash running the unit tests, I can test for
both behaviours because the unit test populates COMP_WORDS manually. So
this tests the bash 3 behaviour:
test_gst_inspect_completion --gst-debug-level=4
and this tests the bash 4 behaviour:
test_gst_inspect_completion --gst-debug-level = 4
Compatible with bash 3.2; doesn't require the bash-completion package at
all (though the easiest way to install this script is still to install
bash-completion, and then drop this script into /etc/bash_completion.d).
Note that bash 3 doesn't break COMP_WORDS according to characters in
COMP_WORDBREAKS, so "property=val" looks like a single word, so this
won't complete property values (on bash 3). Similarly,
"--gst-debug-level=<TAB>" won't complete properly (on bash 3), but
"--gst-debug-level <TAB>" will.
For that reason, I now offer "--gst-debug-level" etc as completions
instead of "--gst-debug-level=".
Functions "_init_completion" and "_parse_help" were provided by the
bash-completion package >= 2.0; now I roll my own equivalent of
"_parse_help", and instead of "_init_completion" I use
"_get_comp_words_by_ref" which is available from bash-completion 1.2
onwards. If the bash-completion package isn't available at all I use
bash's raw facilities, at the expense of not completing properly when
the cursor is in the middle of a word.
The builtin "compopt" doesn't exist in bash 3; those users will just
have to live with the inconvenience of "property=" completing to
"property= " with a trailing space. Property values aren't completed
properly anyway on bash 3 (see above).
"[[ -v var ]]" to test whether a variable is set, also doesn't exist in
bash 3. Neither does ";;&" to fall through in a "case" statement.
In the unit tests:
* On my system (OS X), "#!/bin/bash" is bash 3.2, whereas
"#!/usr/bin/env bash" is the 4.2 version I built myself.
* I have to initialise array variables like "expected=()", or bash 3
treats "+=" as appending to an array already populated with one empty
string.
Completes options like "--gst-debug-level" and the values of some of
those options; completes gst-launch pipeline element names, property
names, and even property values (for enum or boolean properties only).
Doesn't complete all caps specifications, nor element names specified
earlier in the pipeline with "name=...".
The GStreamer version number is hard-coded into the completion script:
This patch is off the master branch and has the version hard-coded as
"1.0"; it needs to be updated if backported to the 0.10 branch. You
could always create a "gstreamer-completion.in" that has the appropriate
version inserted by "configure", but I'd rather not do that. The
hard-coded version is consistent with the previous implementation of
gstreamer-completion, which had the registry path hard-coded as
~/.gstreamer-1.0/registry.xml.
Note that GStreamer 0.10 installs "gst-inspect" and "gst-inspect-0.10".
"gst-inspect --help" only prints 4 flags (--help, --print, --gst-mm,
gst-list-mm) whereas "gst-inspect-0.10 --help-all" prints the full list
of flags. The same applies to "gst-launch" and "gst-launch-0.10".
GStreamer 1.0 only installs "gst-inspect-1.0", not "gst-inspect".
Requires bash 4; only tested with bash 4.2. Requires "bash-completion"
(which you install with your system's package manager).
Put this in /etc/bash_completion.d/ or in `pkg-config
--variable=compatdir bash-completion`, where it will be loaded at the
beginning of every new terminal session;
or in `pkg-config --variable=completionsdir bash-completion`, renamed to
match the name of the command it completes (e.g. "gst-launch-1.0", with
an additional symlink named "gst-inspect-1.0"), where it will be
autoloaded when needed.
test-gstreamer-completion.sh is (for now) in tests/misc -- it might be
worth creating "tests/check/tools", with all the necessary automake
boilerplate, and moving test-gstreamer-completion.sh there, and have it
run automatically with "make check".
IF YOU'RE NEW TO BASH COMPLETION SCRIPTS
----------------------------------------
"complete -F _gst_launch gst-launch-1.0" means that bash will run the
function "_gst_launch" to generate possible completions for the command
"gst-launch-1.0".
"_gst_launch" must return the possible completions in the array variable
COMPREPLY. (Note on bash syntax: "V=(a b c)" assigns three elements to
the array "V").
"compgen" prints a list of possible completions to standard output. Try
it:
compgen -W "abc1 abc2 def" -- "a"
compgen -f -- "/"
The last argument is the word currently being completed; compgen uses it
to filter out the non-matching completions. We put "--" first, in case
the word currently being completed starts with "-" or "--", so that it
isn't treated as a flag to compgen.
For the documentation of COMP_WORDS, COMP_CWORD, etc see
http://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html#index-COMP_005fCWORD-180
See also:
* http://www.gnu.org/software/bash/manual/html_node/Programmable-Completion.html
* http://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html
The bash-completion package provides the helper function
"_init_completion" which populates variables "cur", "prev", and "words".
See
http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=blob;f=bash_completion;h=870811b4;hb=HEAD#l634
Note that by default, bash appends a space to the completed word. When
the completion is "property=" we don't want a trailing space; calling
"compopt -o nospace" modifies the currently-executing completion
accordingly. See
http://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html#index-compopt
Fixes an abort when the old specifiers are used. Fix up the conversion
specifier: it would get overwritten with 'c' below to the extension
format char, which then later is unhandled, leading to the abort.
Also fix up and enable the unit test for this.
https://bugzilla.gnome.org/process_bug.cgi
These account for both possible type size mismatch AND -mms-bitfields
packing. Sizes are taken from an i686-w64-mingw32-built GStreamer,
gcc 4.8.0, mingw-w64 svn-r5685.
Fixes #697551
This is equal to any other caps features but results in unfixed caps. It
would be used by elements that only look at the buffer metadata or are
currently working in passthrough mode, and as such don't care about any
specific features.
These are meant to specify features in caps that are required
for a specific structure, for example a specific memory type
or meta.
Semantically they could be thought of as an extension of the media
type name of the structures and are handled exactly like that.
pop() in collected callback.
There were three threads in the test case that hung: the test thread and two
threads that push buffers. Each thread pushes one buffer on one pad. There are
two pads in the collectpads, so the second buffer will trigger the
collect callback.
This is what happens when the hang occurs:
The first thread pushes a buffer and initializes a cookie to the value of a
counter in the collectpads object and waits on a cond for the counter to change
and for someone to consume the buffer (i.e. _pop() it).
The second thread pushes a buffer and calls the collected callback, which
signals the cond that the test thread is waiting for.
The test thread pops both buffers (without holding any lock). Each call to
_pop() increases the counter and broadcasts the condition that the first thread
is now waiting for. It then joins both threads (hangs).
The first thread wakes up and returns, since its buffer has been consumed.
The second thread starts executing again. When the callback, called by the
second thread, has returned, it initializes a cookie to the value of a counter,
which has already prematurely been increased by the test thread when it popped
the buffers, and waits on a cond for the counter to change and for someone to
consume the buffer (i.e. _pop() it). Since the buffer has already been popped
and the counter has already been increased, it will be stuck forever.
https://bugzilla.gnome.org/show_bug.cgi?id=685555
We previously forgot to initialize the amplitude property to its default, and
thus it was 0.0. Therefore a default lfo controlsource returned a series of
0.0 values and the test was asserting on that.
Set operations on the bitmasks don't make much sense and result
in invalid caps when used as a channel-mask. They are now handled
exactly like integers.
This functionality was not used anywhere except for tests.
https://bugzilla.gnome.org/show_bug.cgi?id=691370
The _1_0 suffixed environment variables override the
non-suffixed ones, so if we're in an environment that
sets the _1_0 suffixed ones, such as jhbuild, we need
to set those to make sure ours actually always get
used.
Implement the same behaviour as gst_pad_push_event when pushing sticky events
fails, that is, don't fail immediately but fail when data flow resumes and
upstream can aggregate properly.
This fixes segment seeks with decodebin and unlinked audio or video branches.
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=687899