* Avoid the computation completely if we know we don't need it (not in
sync time mode)
* Make sure we don't override highest time with GST_CLOCK_TIME_NONE on
unlinked pads
* Ensure the high_time gets properly updated if all pads are not linked
* Fix the comparison in the loop checking whether the target high time
is the same as the current time
* Split wake_up_next_non_linked method to avoid useless calculation
https://bugzilla.gnome.org/show_bug.cgi?id=757353
When preparing a buffering message, don't report 0% if there are
any bytes left in the queue at all. We still have something
to push, so don't tell the app to start buffering - maybe
we'll get more data before actually running dry.
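A minimal sketch of that check (the message calls are the usual GStreamer
ones, but the variable and field names here are illustrative, not the
literal queue2 code):

  /* Illustrative sketch: never report 0% while the queue still holds
   * data to push; field names are not the exact queue2 internals. */
  if (percent == 0 && queue->cur_level.bytes > 0)
    percent = 1;

  gst_element_post_message (GST_ELEMENT (queue),
      gst_message_new_buffering (GST_OBJECT (queue), percent));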
This is useful for features that are produced after probing a specific
node. You want to reload this plugin if the specific node(s) have been
removed, added, or reloaded.
https://bugzilla.gnome.org/show_bug.cgi?id=758080
A plugin is responsible for calculating a hash of its dependencies
in order to determine if the cache should be invalidated or not.
Currently, the hash combining method drops a bit of the original
hash before combining it with an addition. As we use 32 bits for our
hash and shift by 1 bit for each file and directory, the resulting hash
only accounts for the last 32 files, and is most affected by the last file.
A rotating technique (shifting, and adding back the bit that falls off)
can be used to make the addition non-commutative, so that a different
order gives different hashes. In this case, I don't preserve this
behaviour because the order in which the files are provided by the OS
is irrelevant.
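A sketch of that rotate-then-add combiner on a 32-bit hash (illustrative,
not the literal plugin-loader code):

  /* Rotate the accumulator left by one bit so the top bit is kept
   * instead of being shifted out, then combine with an addition. */
  static guint32
  combine_hash (guint32 acc, guint32 file_hash)
  {
    acc = (acc << 1) | (acc >> 31);   /* rotate left by 1 */
    return acc + file_hash;           /* addition, not XOR */
  }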
In most cases, the XOR operation is used to combine hashes; in this
code we use addition. I decided to preserve the addition because
we make use of a non-random hash ((guint) -1) in the algorithm for
matching files that are not really part of the hash (symlinks, special
files). Doing successive XOR on this value would simply flip between
all ones and all zeros. The XOR used with the whitelist has been
preserved as it's based on a fairly randomized hash (g_str_hash).
https://bugzilla.gnome.org/show_bug.cgi?id=758078
This reverts commit 2c475a0355.
This causes issues with h264parse. It breaks timestamps because
there are headers in the middle of the stream, and the reverted patch
makes the timestamps for those differ from the ones that
are adjusted, creating a discontinuity and leading to sync
issues.
Otherwise the buffer was left with the original values and later would
be compared with other buffers that were converted to running time,
leading to bad interleaving of multiple streams.
https://bugzilla.gnome.org/show_bug.cgi?id=757961
baseparse tries to preserve timestamps from upstream if
it is running on a time segment and writes them to
output buffers. It assumes the first DTS is going to be
segment.start and sets that on the first buffers. In case
the buffer is a header buffer, it has no timestamps and
would end up with only the DTS set due to this mechanism.
This patch prevents this by skipping this behavior for
header buffers.
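A hedged sketch of the idea (not the literal baseparse code, and the
first-buffer flag here is hypothetical): only fall back to segment.start
when the buffer is not flagged as a header.

  /* Hedged sketch: skip the segment.start fallback for header buffers. */
  if (first_buffer &&
      !GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_HEADER) &&
      !GST_CLOCK_TIME_IS_VALID (GST_BUFFER_DTS (buffer)))
    GST_BUFFER_DTS (buffer) = segment->start;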
https://bugzilla.gnome.org/show_bug.cgi?id=757961
The install hook needs to be an install-data-hook, not an install-exec-hook, as the
helpers are installed into helperdir, which is considered data (only path
variables with "exec" in the name are considered executables).
The explicit dependency on install-helpersPROGRAMS was an attempt at solving
this, but this causes occasional races where install-helpersPROGRAMS can run
twice in parallel (once via install-all, once via the hook's dependency).
https://bugzilla.gnome.org/show_bug.cgi?id=758029
On iOS/OSX g_get_current_time was used by default. However, mach_time is
the preferred high-resolution monotonic clock to be used on Apple
platforms.
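For reference, converting mach_absolute_time() ticks to nanoseconds looks
roughly like this (a minimal standalone sketch, not the GLib/GStreamer
integration itself):

  #include <stdint.h>
  #include <mach/mach_time.h>

  /* mach_absolute_time() returns monotonic ticks; mach_timebase_info()
   * gives the numer/denom ratio to convert those ticks to nanoseconds. */
  static uint64_t
  monotonic_nanoseconds (void)
  {
    static mach_timebase_info_data_t info = { 0, 0 };

    if (info.denom == 0)
      mach_timebase_info (&info);

    return mach_absolute_time () * info.numer / info.denom;
  }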
https://bugzilla.gnome.org/show_bug.cgi?id=758012
Helps catch when a state change is starting and ending.
It is also possible to track the end of state changes by checking the
async-done or state-change messages.
This is particularly important for elements that do async state changes.
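For reference, the end of a state change can be tracked from the bus along
these lines (standard GstBus message handling, not part of this change):

  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_STATE_CHANGED:{
      GstState old_state, new_state, pending;

      gst_message_parse_state_changed (msg, &old_state, &new_state, &pending);
      g_print ("%s: %s -> %s\n", GST_OBJECT_NAME (msg->src),
          gst_element_state_get_name (old_state),
          gst_element_state_get_name (new_state));
      break;
    }
    case GST_MESSAGE_ASYNC_DONE:
      g_print ("%s completed its async state change\n",
          GST_OBJECT_NAME (msg->src));
      break;
    default:
      break;
  }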
Adds 3 new tests for testing accept-caps behavior with
proxy-caps pads.
1) A scenario where there is no proxy. The caps should be compared to the
template caps of the pad
2) A scenario where there is a compatible pad. The caps should be compared
to the proxied pad caps (and also with the template)
3) A scenario where there is an incompatible proxy pad. No caps should be
possible at all.
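A sketch of what each scenario boils down to in the test (the pad variable
and caps string are illustrative):

  /* Illustrative: query accept-caps on the pad under test; depending on
   * the scenario the answer comes from the template caps alone or also
   * from the caps proxied from the internally linked pad. */
  GstCaps *caps = gst_caps_from_string ("video/x-raw, format=(string)I420");
  gboolean accepted = gst_pad_query_accept_caps (sinkpad, caps);

  fail_unless (accepted);   /* fail_if (accepted) for the incompatible case */
  gst_caps_unref (caps);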
https://bugzilla.gnome.org/show_bug.cgi?id=754112
Validate that the proxy pad indeed accepts the caps by also
comparing with the pad template caps; otherwise, when the pad
has no internally linked pads, it would always return TRUE.
https://bugzilla.gnome.org/show_bug.cgi?id=754112
Sometimes filesink cleanup during stop may fail due to an fclose error.
In this case the object is left partially cleaned up, with no file opened
but still holding the old file descriptor.
It's not possible to change the location property in such a state,
so the next start will overwrite the old file if 'append' is not set.
According to the man page and the POSIX standard on fclose behavior (extract):
------------------------------------------------------------------------
The fclose() function shall cause the stream pointed to by stream
to be flushed and the associated file to be closed.
...
Whether or not the call succeeds, the stream shall be disassociated
from the file and any buffer set by the setbuf() or setvbuf()
function shall be disassociated from the stream.
...
The fclose() function shall perform the equivalent of a close()
on the file descriptor that is associated with the stream
pointed to by stream.
After the call to fclose(), any use of stream results
in undefined behavior.
------------------------------------------------------------------------
So the file is in the 'closed' state whether fclose succeeds or not,
and cleanup can be continued.
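A hedged sketch of the resulting cleanup logic (not the literal filesink
code; the struct fields are illustrative): report the error but drop the
stream pointer either way and keep cleaning up.

  /* Per POSIX the stream is disassociated from the file whether or not
   * fclose() succeeds, so clear our pointer in every case. */
  if (sink->file) {
    if (fclose (sink->file) != 0)
      GST_ELEMENT_ERROR (sink, RESOURCE, CLOSE,
          ("Error closing file \"%s\".", sink->filename), GST_ERROR_SYSTEM);

    sink->file = NULL;          /* the stream is gone either way */
  }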
https://bugzilla.gnome.org/show_bug.cgi?id=757596
Instead of re-sending sticky events over and over to a not-linked
pad, mark them as sent the first time. If the not-linked came from
downstream, it already received the events. If the pad is actually
not-linked, the sticky events will be rescheduled when the
pad is linked anyway.
There is a similar explanation in gst_caps_make_writable, but the existing
documentation can be misleading since it does not define what 'is already
writable' means.
Also note when this function is meant to be used.
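For context, the usual pattern this documentation covers (standard GstCaps
usage):

  /* gst_caps_make_writable() takes ownership of the passed reference and
   * returns caps that are safe to modify: the same object if it was
   * already writable, or a writable copy otherwise. */
  caps = gst_caps_make_writable (caps);
  gst_caps_set_simple (caps, "width", G_TYPE_INT, 320,
      "height", G_TYPE_INT, 240, NULL);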
The input of queue/queue2 might have DTS set, in which case we want
to take that into account (instead of the PTS) to calculate position
and queue levels.
https://bugzilla.gnome.org/show_bug.cgi?id=756507
In order to accurately determine the amount (in time) of data
travelling in queues, we should use an increasing value.
If buffers are encoded and potentially reordered, we should be
using their DTS (increasing) and not their PTS (reordered).
https://bugzilla.gnome.org/show_bug.cgi?id=756507
API: GST_BUFFER_DTS_OR_PTS
Many scenarios/elements require dealing with streams of buffers that
might have DTS set (i.e. encoded data, potentially reordered).
To simplify getting the increasing "timestamp" of those buffers, create
a macro that will return the DTS if valid, and the PTS otherwise.
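In essence the macro and a typical use look like this (the queue fields in
the usage snippet are illustrative):

  /* Return the DTS when it is valid, the PTS otherwise. */
  #define GST_BUFFER_DTS_OR_PTS(buf) \
    (GST_BUFFER_DTS_IS_VALID (buf) ? GST_BUFFER_DTS (buf) : GST_BUFFER_PTS (buf))

  /* Typical use when tracking an increasing timestamp, e.g. queue levels: */
  GstClockTime ts = GST_BUFFER_DTS_OR_PTS (buffer);

  if (GST_CLOCK_TIME_IS_VALID (ts))
    queue->sinktime = gst_segment_to_running_time (&queue->sink_segment,
        GST_FORMAT_TIME, ts);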