Add domain checks for the input values, and a variable-precision
calculation that loops if necessary, so that we never overflow the
accumulators and then silently produce garbage results.
Make the (non-public) linear regression function available for
unit testing by putting it in a separate source file that the test
can include. Add a unit test checking that the new regression function
produces sensible results for several inputs taken from real-world
captures.
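A minimal sketch of the include-the-source-file technique, with a deliberately
simplified double-precision regression standing in for the real variable-precision
one (the file and function names here are illustrative, not the actual ones):

  /* gstclock-linreg.c (hypothetical): non-public helper, no installed header.
   * gstclock.c and the unit test both do `#include "gstclock-linreg.c"`, so the
   * static function stays private but is still directly testable. */
  #include <glib.h>

  /* least-squares slope/offset over n (x, y) samples, with basic domain checks */
  static gboolean
  linear_regression_example (const gdouble * x, const gdouble * y, guint n,
      gdouble * slope, gdouble * offset)
  {
    gdouble sx = 0, sy = 0, sxx = 0, sxy = 0, denom;
    guint i;

    if (x == NULL || y == NULL || n < 2)
      return FALSE;             /* domain check: need at least two samples */

    for (i = 0; i < n; i++) {
      sx += x[i];
      sy += y[i];
      sxx += x[i] * x[i];
      sxy += x[i] * y[i];
    }
    denom = n * sxx - sx * sx;
    if (denom == 0)
      return FALSE;             /* domain check: all x values identical */

    *slope = (n * sxy - sx * sy) / denom;
    *offset = (sy - *slope * sx) / n;
    return TRUE;
  }
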
Pools are allowed to change the size in order to adapt padding, so
don't check the size. Normally a pool will change the size without
failing set_config(), but if it ends up changing the size, the
validate method may fail with a false positive.
https://bugzilla.gnome.org/show_bug.cgi?id=741420
If a task thread is calling pause on itself and the
controlling/"main" thread stops the task, it could end up in a race
where gst_task_func loops and then checks for paused after the
controlling thread has just changed the task state to stopped.
The task would then actually call func again even though it was
both paused and stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=740001
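A simplified illustration of the fix pattern (not GstTask's actual code):
re-read the task state under the lock after the task function returns, so a
concurrent stop is honoured instead of calling func again.

  #include <glib.h>

  typedef enum { TASK_STARTED, TASK_PAUSED, TASK_STOPPED } TaskState;

  static void
  task_loop (GMutex * lock, GCond * cond, TaskState * state,
      void (*func) (gpointer), gpointer user_data)
  {
    while (TRUE) {
      func (user_data);

      g_mutex_lock (lock);
      /* Decide what to do next from the *current* state, not from a value
       * observed before func() ran; a self-pause followed by a stop from
       * the controlling thread must not lead to another func() call. */
      while (*state == TASK_PAUSED)
        g_cond_wait (cond, lock);
      if (*state == TASK_STOPPED) {
        g_mutex_unlock (lock);
        break;
      }
      g_mutex_unlock (lock);
    }
  }
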
In this mode we accept previously set filter caps until
upstream renegotiates to something that is compatible
with the current filter caps.
This allows dynamic caps changes in the pipeline even
if there is a queue between any conversion element
and the capsfilter. Without this we would get not-negotiated
errors if the timing is bad.
https://bugzilla.gnome.org/show_bug.cgi?id=739002
Use an array of constant strings so if arguments get
removed from it they are not considered leaked, and
valgrind is happy. Still some stuff leaking in GLib
though.
Make the pipe socket non-blocking, so we don't
end up blocked in a write on the pipe while the
src is EOS and not reading data any more, and thus
never unblock and never notice that we're done.
This would happen quite reliably on the rpi.
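Illustrative only (not the actual test code): how a pipe file descriptor can
be switched to non-blocking so the writer gets EAGAIN instead of hanging once
the reader has gone away.

  #include <fcntl.h>

  /* returns 0 on success, -1 on error */
  static int
  set_fd_nonblocking (int fd)
  {
    int flags = fcntl (fd, F_GETFL, 0);

    if (flags < 0)
      return -1;
    return fcntl (fd, F_SETFL, flags | O_NONBLOCK);
  }

With O_NONBLOCK set, write() fails with EAGAIN instead of blocking, so the
test can notice that the src has finished and bail out.
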
Adds API to get or peek a sub-reader of a certain size from
a given byte reader. This is useful when parsing nested chunks:
one can easily get a byte reader for a sub-chunk and make
sure one never reads beyond the sub-chunk boundary.
API: gst_byte_reader_peek_sub_reader()
API: gst_byte_reader_get_sub_reader()
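A usage sketch for the get variant, assuming a hypothetical chunk format of a
32-bit big-endian size followed by the payload:

  #include <gst/base/gstbytereader.h>

  static gboolean
  parse_chunk (const guint8 * data, guint size)
  {
    GstByteReader reader, sub;
    guint32 chunk_size;

    gst_byte_reader_init (&reader, data, size);

    if (!gst_byte_reader_get_uint32_be (&reader, &chunk_size))
      return FALSE;

    /* 'sub' is limited to the sub-chunk, so parsing with it can never read
     * past the chunk boundary; the parent reader is advanced past the chunk */
    if (!gst_byte_reader_get_sub_reader (&reader, &sub, chunk_size))
      return FALSE;

    /* ... parse the payload with 'sub', continue with 'reader' after it ... */
    return TRUE;
  }
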
Adds gst_byte_reader_masked_scan_uint32_peek, just like
GstAdapter has _peek and non-_peek versions.
Upgraded tests to check that the returned value is correct in the
_peek version.
API: gst_byte_reader_masked_scan_uint32_peek
https://bugzilla.gnome.org/show_bug.cgi?id=728356
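A sketch of how the _peek variant might be used, assuming its signature
mirrors the existing non-_peek scan plus an out parameter for the matched
32-bit value:

  #include <gst/base/gstbytereader.h>

  /* look for a 0x000001xx start code without advancing the reader and also
   * retrieve the matched word (the _peek part) */
  static gboolean
  find_start_code (const GstByteReader * reader, guint * pos, guint32 * word)
  {
    guint offset = gst_byte_reader_masked_scan_uint32_peek (reader,
        0xffffff00, 0x00000100, 0, gst_byte_reader_get_remaining (reader),
        word);

    if (offset == (guint) -1)
      return FALSE;
    *pos = offset;
    return TRUE;
  }
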
Complements my previous patch for gst_caps_set_features, which would
previously assert and leak the old GstCapsFeatures if the caps already
had a GstCapsFeatures and you were trying to replace it with a new one.
When no data is flowing on the sinkpads and an EOS event
arrives on one of the sinkpads, funnel forwards the EOS
event downstream. It forwards the EOS because the last sinkpad
is NULL. Also, the funnel unit test is not checking
the correct behaviour as it should: the unit test should
fail if one of the sinkpads already has EOS on it and
we try to push one more EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=731716
Otherwise negative values will set all of the 64 bits due to the
two's complement representation of negative values.
Also add a test for negative int ranges.
When a pad is added the need-parent flag is set to true, so when
it is removed the flag should be set back to false.
This was preventing GstPads from being reused in elements (removed and
later re-added). A unit test was added to verify that this is
working now.
The use case is tsdemux, which has a program-number property and
allows the user to switch programs. In order to do that, tsdemux
will remove the pads of the current program and add the ones from
the new program. The removed pads are kept in the demuxer in case
the user selects the old program again.
Adds a utility struct that is capable of storing and aggregating flow returns
associated with pads.
This way all demuxers will have a standard function to use and get the
same expected results.
Includes tests.
https://bugzilla.gnome.org/show_bug.cgi?id=709224
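A usage sketch under the assumption that the utility ends up as
GstFlowCombiner with add-pad and update-flow helpers (the exact names may
differ):

  #include <gst/base/gstflowcombiner.h>

  /* One combiner per demuxer, shared by all source pads; pads are registered
   * with gst_flow_combiner_add_pad() when they are added to the element. */
  static GstFlowReturn
  push_and_combine (GstFlowCombiner * combiner, GstPad * srcpad,
      GstBuffer * buffer)
  {
    GstFlowReturn ret = gst_pad_push (srcpad, buffer);

    /* feed the per-pad result into the combiner; the return value is the
     * aggregated flow for the whole element (e.g. NOT_LINKED on one pad is
     * fine as long as another pad is still linked) */
    return gst_flow_combiner_update_flow (combiner, ret);
  }
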
Stores the last result of a gst_pad_push or a pull on the GstPad and provides
a getter and a macro to access this field.
Whenever the pad is inactive it is set to FLUSHING
API: gst_pad_get_last_flow_return
https://bugzilla.gnome.org/show_bug.cgi?id=709224
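A small sketch of the getter in use (the macro form, assumed here to be
GST_PAD_LAST_FLOW_RETURN, reads the same field):

  #include <gst/gst.h>

  static void
  report_last_flow (GstPad * srcpad)
  {
    /* result of the last gst_pad_push()/pull on this pad;
     * GST_FLOW_FLUSHING while the pad is inactive */
    GstFlowReturn last = gst_pad_get_last_flow_return (srcpad);

    if (last == GST_FLOW_NOT_LINKED)
      g_warning ("pad %s is not linked downstream", GST_PAD_NAME (srcpad));
  }
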
Currently there is no way to unlock a buffer pool other than
stopping it. This may have the effect of freeing all the buffers,
which is too heavy for a seek. This patch adds a method to enter and
leave the flushing state. As a convenience, flush_start/flush_stop
virtual methods are added so pool implementations can also unblock their own
internal poll atomically with the rest of the pool. This is fully
backward compatible with doing stop/start to actually flush the pool
(as is done in GstBaseSrc).
https://bugzilla.gnome.org/show_bug.cgi?id=727611
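A sketch of how a seek handler could use this, assuming the new method is
exposed as gst_buffer_pool_set_flushing():

  #include <gst/gst.h>

  static void
  flush_pool_for_seek (GstBufferPool * pool)
  {
    /* enter flushing: pending and new acquire_buffer() calls return
     * GST_FLOW_FLUSHING instead of blocking, no buffers are freed */
    gst_buffer_pool_set_flushing (pool, TRUE);

    /* ... perform the flushing seek ... */

    /* leave flushing: acquiring buffers works again */
    gst_buffer_pool_set_flushing (pool, FALSE);
  }
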
When we call gst_buffer_pool_set_config() the pool may return FALSE and
slightly change the parameters. This helper is useful to do the minimal required
validation before accepting the modified configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=727916
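A sketch of the intended pattern, assuming the helper is
gst_buffer_pool_config_validate_params():

  #include <gst/gst.h>

  static gboolean
  try_pool_config (GstBufferPool * pool, GstCaps * caps, guint size,
      guint min_buffers)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);

    gst_buffer_pool_config_set_params (config, caps, size, min_buffers, 0);
    if (gst_buffer_pool_set_config (pool, config))
      return TRUE;

    /* the pool rejected/modified the config; check that caps, size and
     * min_buffers still meet our requirements, then accept the updated one */
    config = gst_buffer_pool_get_config (pool);
    if (!gst_buffer_pool_config_validate_params (config, caps, size,
            min_buffers, 0)) {
      gst_structure_free (config);
      return FALSE;
    }
    return gst_buffer_pool_set_config (pool, config);
  }
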
If a pool config is being configured again, check if the configuration has changed.
If not, skip that step. Finally, if the pool is active, try deactivating it.
https://bugzilla.gnome.org/show_bug.cgi?id=728268
Keep it simple. Likely also makes things easier for bindings,
and efficiency clearly has not been a consideration given how
the existing code handled these lists.
In order to be deterministic, multiple waiting GstClockIDs need to be
released at the same time, or else one can get into the situation where
the one being released first adds itself back again before the next
waiting one is released.
Test added for new API and old tests rewritten to comply.
From the test case:
/* This test creates a multiqueue with 2 streams. One receives
* a constant flow of buffers, the other only gets one buffer, and then
* new-segment events, and returns not-linked. The multiqueue should not fill.
*/
If one of the queues goes EOS and the other returns NOT_LINKED, the stream
can be considered EOS, as NOT_LINKED means that one of the branches has no
sink downstream that would block the EOS message posting.
https://bugzilla.gnome.org/show_bug.cgi?id=725917
Tag allocated buffers with TAG_MEMORY. When they are released later,
only add them back to the pool if the tag is still there and the memory
has not been changed, otherwise throw the buffer away.
Add unit test to check various scenarios.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724481
Store the EOS event seqnum and use it when creating the
new EOS event to be pushed downstream. To know whether the EOS
was caused by the EOS event received on send_event, a
'forced_eos' flag is used so that the correct seqnum is set on
the event pushed downstream.
Useful if the application wants to check whether the EOS message
was generated from its own pushed EOS or from another source
(stream really finished).
Also adds a test for this
https://bugzilla.gnome.org/show_bug.cgi?id=722791
Baseparse stores buffers for reverse playback to push on the next
DISCONT. The issue was that it wouldn't ever check for a discont
in passthrough mode as it skips all real parsing. This test
was created to verify this issue and prevent it from happening again.
https://bugzilla.gnome.org/show_bug.cgi?id=721941
Checking the lower bound twice is great (you never know, it might change
between the two calls by someone using emacs butterfly-mode), but it's a bit
more useful to check that the higher bounds are also identical.
Detected by Coverity
gst_parse_launchv, gst_parse_launchv_full and gst_parse_launch_full
all return floating refs, the same as gst_parse_launch, which just
calls gst_parse_launch_full internally anyway.
Add a unit test assertion to check that this is true.
Spotted by nemequ on IRC.
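A minimal check of that floating-ref behaviour (using plain GLib asserts
rather than the check framework):

  #include <gst/gst.h>

  static void
  assert_parse_launch_full_returns_floating_ref (void)
  {
    GstElement *pipeline = gst_parse_launch_full ("fakesrc ! fakesink",
        NULL, GST_PARSE_FLAG_NONE, NULL);

    g_assert (pipeline != NULL);
    g_assert (g_object_is_floating (G_OBJECT (pipeline)));

    gst_object_ref_sink (pipeline);
    gst_object_unref (pipeline);
  }
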
The check itself is racy.
(CK_FORK=no GST_CHECK=test_output_order make elements/multiqueue.forever).
The problem is indeed the test and not the actual element behaviour.
The objects to push are being pulled out of the single internal queues in the
right order and at the right time...
But between:
* the moment the global multiqueue lock is released (which was used to detect
whether we should pop the next buffer and push it downstream)
* and the moment it is received by the source pad (which does the check)
=> another single queue (like the unlinked pad) might pop and push a buffer
downstream
What should we do? Putting a bigger margin of error (say 5 buffers) doesn't
help, it'll eventually fail.
I can't see how we can detect this reliably.
https://bugzilla.gnome.org/show_bug.cgi?id=708661
Wrap caps strings so that serialization and deserialization
of caps inside caps can be handled. Otherwise the values from the inner caps
are parsed as if they belonged to the outer one.
https://bugzilla.gnome.org/show_bug.cgi?id=708772
Bash 3's completion doesn't split words by characters in
COMP_WORDBREAKS. In particular it doesn't split at "=" signs. Now
_gst_launch_parse handles both the bash 3 and bash 4 formats of COMP_WORDS.
Note that "${cur%%=*}" means cur's value with the longest possible match
of "=*" deleted from the end; "${cur#*=}" means cur's value with the
shortest possible match of "*=" deleted from the beginning. See
http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
Regardless of the version of bash running the unit tests, I can test for
both behaviours because the unit test populates COMP_WORDS manually. So
this tests the bash 3 behaviour:
test_gst_inspect_completion --gst-debug-level=4
and this tests the bash 4 behaviour:
test_gst_inspect_completion --gst-debug-level = 4
Compatible with bash 3.2; doesn't require the bash-completion package at
all (though the easiest way to install this script is still to install
bash-completion, and then drop this script into /etc/bash_completion.d).
Note that bash 3 doesn't break COMP_WORDS according to characters in
COMP_WORDBREAKS, so "property=val" looks like a single word, so this
won't complete property values (on bash 3). Similarly,
"--gst-debug-level=<TAB>" won't complete properly (on bash 3), but
"--gst-debug-level <TAB>" will.
For that reason, I now offer "--gst-debug-level" etc as completions
instead of "--gst-debug-level=".
Functions "_init_completion" and "_parse_help" were provided by the
bash-completion package >= 2.0; now I roll my own equivalent of
"_parse_help", and instead of "_init_completion" I use
"_get_comp_words_by_ref" which is available from bash-completion 1.2
onwards. If the bash-completion package isn't available at all I use
bash's raw facilities, at the expense of not completing properly when
the cursor is in the middle of a word.
The builtin "compopt" doesn't exist in bash 3; those users will just
have to live with the inconvenience of "property=" completing to
"property= " with a trailing space. Property values aren't completed
properly anyway on bash 3 (see above).
"[[ -v var ]]" to test whether a variable is set, also doesn't exist in
bash 3. Neither does ";;&" to fall through in a "case" statement.
In the unit tests:
* On my system (OS X), "#!/bin/bash" is bash 3.2, whereas
"#!/usr/bin/env bash" is the 4.2 version I built myself.
* I have to initialise array variables like "expected=()", or bash 3
treats "+=" as appending to an array already populated with one empty
string.
Completes options like "--gst-debug-level" and the values of some of
those options; completes gst-launch pipeline element names, property
names, and even property values (for enum or boolean properties only).
Doesn't complete all caps specifications, nor element names specified
earlier in the pipeline with "name=...".
The GStreamer version number is hard-coded into the completion script:
This patch is off the master branch and has the version hard-coded as
"1.0"; it needs to be updated if backported to the 0.10 branch. You
could always create a "gstreamer-completion.in" that has the appropriate
version inserted by "configure", but I'd rather not do that. The
hard-coded version is consistent with the previous implementation of
gstreamer-completion, which had the registry path hard-coded as
~/.gstreamer-1.0/registry.xml.
Note that GStreamer 0.10 installs "gst-inspect" and "gst-inspect-0.10".
"gst-inspect --help" only prints 4 flags (--help, --print, --gst-mm,
gst-list-mm) whereas "gst-inspect-0.10 --help-all" prints the full list
of flags. The same applies to "gst-launch" and "gst-launch-0.10".
GStreamer 1.0 only installs "gst-inspect-1.0", not "gst-inspect".
Requires bash 4; only tested with bash 4.2. Requires "bash-completion"
(which you install with your system's package manager).
Put this in /etc/bash_completion.d/ or in `pkg-config
--variable=compatdir bash-completion`, where it will be loaded at the
beginning of every new terminal session;
or in `pkg-config --variable=completionsdir bash-completion`, renamed to
match the name of the command it completes (e.g. "gst-launch-1.0", with
an additional symlink named "gst-inspect-1.0"), where it will be
autoloaded when needed.
test-gstreamer-completion.sh is (for now) in tests/misc -- it might be
worth creating "tests/check/tools", with all the necessary automake
boilerplate, and moving test-gstreamer-completion.sh there, and have it
run automatically with "make check".
IF YOU'RE NEW TO BASH COMPLETION SCRIPTS
----------------------------------------
"complete -F _gst_launch gst-launch-1.0" means that bash will run the
function "_gst_launch" to generate possible completions for the command
"gst-launch-1.0".
"_gst_launch" must return the possible completions in the array variable
COMPREPLY. (Note on bash syntax: "V=(a b c)" assigns three elements to
the array "V").
"compgen" prints a list of possible completions to standard output. Try
it:
compgen -W "abc1 abc2 def" -- "a"
compgen -f -- "/"
The last argument is the word currently being completed; compgen uses it
to filter out the non-matching completions. We put "--" first, in case
the word currently being completed starts with "-" or "--", so that it
isn't treated as a flag to compgen.
For the documentation of COMP_WORDS, COMP_CWORD, etc see
http://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html#index-COMP_005fCWORD-180
See also:
* http://www.gnu.org/software/bash/manual/html_node/Programmable-Completion.html
* http://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html
The bash-completion package provides the helper function
"_init_completion" which populates variables "cur", "prev", and "words".
See
http://anonscm.debian.org/gitweb/?p=bash-completion/bash-completion.git;a=blob;f=bash_completion;h=870811b4;hb=HEAD#l634
Note that by default, bash appends a space to the completed word. When
the completion is "property=" we don't want a trailing space; calling
"compopt -o nospace" modifies the currently-executing completion
accordingly. See
http://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html#index-compopt
Fixes abort when the old specifiers are used. Fix up the conversion
specifier: it would get overwritten below with the extension format
char 'c', which is then unhandled later, leading to the abort.
Also fix up and enable the unit test for this.
https://bugzilla.gnome.org/process_bug.cgi
These account for both possible type size mismatch AND -mms-bitfields
packing. Sizes are taken from an i686-w64-mingw32-built GStreamer,
gcc 4.8.0, mingw-w64 svn-r5685.
Fixes #697551
This is equal to any other caps features but results in unfixed caps. It
would be used by elements that only look at the buffer metadata or are
currently working in passthrough mode, and as such don't care about any
specific features.
These are meant to specify features in caps that are required
for a specific structure, for example a specific memory type
or meta.
Semantically they could be thought of as an extension of the media
type name of the structures and are handled exactly like that.
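Illustrative only: attaching a memory-type caps feature to a structure
("memory:SystemMemory" is just an example feature string):

  #include <gst/gst.h>

  static GstCaps *
  make_caps_with_feature (void)
  {
    GstCaps *caps = gst_caps_new_simple ("video/x-raw",
        "format", G_TYPE_STRING, "NV12", NULL);

    /* the feature acts like an extension of the media type name of
     * structure 0: structures only match if their features match too */
    gst_caps_set_features (caps, 0,
        gst_caps_features_new ("memory:SystemMemory", NULL));

    return caps;
  }
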
pop() in collected callback.
There were three threads in the test case that hung: the test thread and two
threads that push buffers. Each thread pushes one buffer on one pad. There are
two pads in the collectpads, so the second buffer will trigger the
collect callback.
This is what happens when the hang occurs:
The first thread pushes a buffer and initializes a cookie to the value of a
counter in the collectpads object and waits on a cond for the counter to change
and for someone to consume the buffer (i.e. _pop() it).
The second thread pushes a buffer and calls the collected callback, which
signals the cond that the test thread is waiting for.
The test thread pops both buffers (without holding any lock). Each call to
_pop() increases the counter and broadcasts the condition that the first thread
is now waiting for. It then joins both threads (and hangs).
The first thread wakes up and returns, since its buffer has been consumed.
The second thread starts executing again. When the callback, called by the
second thread, has returned, it initializes a cookie to the value of a counter,
which has already prematurely been increased by the test thread when it popped
the buffers, and waits on a cond for the counter to change and for someone to
consume the buffer (i.e. _pop() it). Since the buffer has already been popped
and the counter has already been increased, it will be stuck forever.
https://bugzilla.gnome.org/show_bug.cgi?id=685555
We previously forgot to initialize the amplitude property to the default and
thus it was 0.0. Therefore a default LFO control source returned a series of
0.0 values and the test was asserting on that.
Set operations on the bitmasks don't make much sense and result
in invalid caps when used as a channel-mask. They are now handled
exactly like integers.
This functionality was not used anywhere except for tests.
https://bugzilla.gnome.org/show_bug.cgi?id=691370
The _1_0 suffixed environment variables override the
non-suffixed ones, so if we're in an environment that
sets the _1_0 suffixed ones, such as jhbuild, we need
to set those to make sure ours actually always get
used.
Implement the same behaviour as gst_pad_push_event when pushing sticky events
fails, that is, don't fail immediately but fail when data flow resumes and
upstream can aggregate properly.
This fixes segment seeks with decodebin and unlinked audio or video branches.
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=687899
Fixes negotiation taking a ridiculous amount of
time (multiple 10s of seconds on a core2) when
there are duplicate entries in lists.
Could have a negative performance impact on other
scenarios because we now have to iterate the
dest list to avoid duplicates, but we don't
have a lot of lists any more these days, and
they tend to be small anyway. The negatives
are hopefully countered by the positive effects
of reducing the list length early on in the
process. And in any case, it's the right thing
to do.
Based on patch by Andre Moreira Magalhaes.
https://bugzilla.gnome.org/show_bug.cgi?id=684981
MIME-type -> media types
Fix up the manual in various places with the 1.0 way of doing things
such as probes, static elements, scheduling, ...
Add porting from 0.10 to 1.0 chapter.
Add probe example to build.
Remove some docs for removed components such as GstMixer and
GstPropertyProbe, XML...
Also add a test to make sure that if a pad probe is removed while its
callback is running, the cleanup_hook isn't called again if it
returns GST_PAD_PROBE_REMOVE
define GLIB_DISABLE_DEPRECATION_WARNINGS earlier so that it is defined before
the glib headers are loaded or else we trip over the GValueArray deprecations in
gst-inspect.c.
Our check would make sure that GLib segfaults when
someone tries to instantiate an abstract type, which
is an extremely useful thing to check for.
In newer GLibs this is fixed and we get an abort with
a g_error() now it seems, so let's just remove this
check entirely.
This is because we need to be able to signal different TOCs
to downstream elements such as muxers and the application,
and because we need to send both types as events (because
the sink should post the TOC messages for the app in the
end, just like tag messages are now posted by the sinks),
and hence need to make TOC events multi-sticky.
https://bugzilla.gnome.org/show_bug.cgi?id=678742