They can fail for various reasons.
For non-fatal cases (such as the dump feature of identity and fakesink),
we just silently skip the failure.
For other cases, post an error message.
https://bugzilla.gnome.org/show_bug.cgi?id=728326
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
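A minimal sketch of the resulting pattern (surrounding element code assumed):

    GstCaps *caps = gst_pad_get_current_caps (pad);
    if (caps != NULL) {
      /* use the caps */
      gst_caps_unref (caps);
    } else {
      /* caps were cleared in the meantime, handle the NULL case */
    }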
Change the gst_tracer_record_new() API to take the parameters that make up the
spec structure directly. This allows us to own the top-level structure and
also collect the args so that we can take ownership of the sub-structures.
https://bugzilla.gnome.org/show_bug.cgi?id=760821
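A rough sketch of how a call to the new API could look (record name and field
layout are illustrative only, not taken from an actual tracer):

    GstTracerRecord *rec;

    rec = gst_tracer_record_new ("latency.class",
        "time", GST_TYPE_STRUCTURE, gst_structure_new ("value",
            "type", G_TYPE_GTYPE, G_TYPE_UINT64,
            NULL),
        NULL);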
The use-tags-bitrate property makes queue2 look at
tag events in the stream and extract a bitrate for the
stream to use when calculating a duration for buffers
that don't have one explicitly set.
This lets queue2 sensibly buffer to a time threshold
for any bytestream for which the general bitrate is known.
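For example (assuming a queue2 instance named q2):

    g_object_set (q2, "use-tags-bitrate", TRUE, NULL);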
Only hide GstTracer and GstTracerRecord API behind GST_USE_UNSTABLE_API,
but don't spew any warnings, otherwise everyone has to define this
to avoid compiler warnings.
This reverts parts of commit 89ee5d948d.
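Tracer implementations then opt in explicitly, e.g.:

    /* acknowledge that the tracer API may still change */
    #define GST_USE_UNSTABLE_API
    #include <gst/gsttracer.h>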
We use this class to register tracer log entry metadata and build a log
template. With the log template we can serialize log data very efficiently.
This also simplifies the logging code, since that is now a simple varargs
function that is not exposing the implementation details.
Add docs for the new class and basic tests.
Remove the previous log handler.
Fixes #760267
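Logging then becomes a single varargs call against the record, roughly
('rec' and 'elapsed' are placeholder names):

    /* the varargs must match the fields declared in the record template */
    gst_tracer_record_log (rec, (guint64) elapsed);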
rename gst-launch --> gst-launch-1.0
replace old elements with new elements (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
https://bugzilla.gnome.org/show_bug.cgi?id=759432
segment.position is meant for internal usage only, but the various
GST_EVENT_SEGMENT creation/parsing functions won't clear that field.
Use the appropriate segment boundary as an initial value instead.
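A minimal sketch of the intended initialisation (my reading of "appropriate
segment boundary", picking it based on the playback direction):

    segment.position = (segment.rate >= 0.0) ? segment.start : segment.stop;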
When synchronizing the output by time, there are some use-cases (like
allowing gapless playback downstream) where we want the unlinked streams
to stay slightly behind the linked streams.
The "unlinked-cache-time" property allows the user to specify by how
much time the unlinked streams should wait before pushing again.
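For example (the 250ms value is only illustrative):

    g_object_set (multiqueue, "unlinked-cache-time", 250 * GST_MSECOND, NULL);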
Multiqueue should only be used to cope with:
* decoupling upstream and downstream threading (i.e. having separate threads
for elementary streams).
* Ensuring individual queues have enough space to cope with upstream interleave
(distance in stream time between co-located samples). This is to guarantee
that we have enough room in each individual queue to provide new data in
each, without being blocked.
* Limit the queue sizes to that interleave distance (and an extra minimal
buffering size). This is to ensure we don't consume too much memory.
Based on that, multiqueue now continuously calculates the input interleave
(per incoming streaming thread). From that, it calculates a target
interleave (currently 1.5 x real_interleave + 250ms padding).
If the target interleave is greater than the current max_size.time, it will
update it accordingly (to allow enough margin to not block).
If the target interleave goes down by more than 50%, we re-adjust it once
we know we have gone past a safe distance (2 x current max_size.time).
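A simplified sketch of that sizing logic (field and variable names are
hypothetical, not the actual multiqueue code):

    /* target = 1.5 x measured interleave + 250ms padding */
    GstClockTime target = (3 * interleave) / 2 + 250 * GST_MSECOND;

    if (target > mq->max_size.time) {
      /* grow immediately so we don't block */
      mq->max_size.time = target;
    } else if (target < mq->max_size.time / 2 &&
        position > last_update_position + 2 * mq->max_size.time) {
      /* only shrink once we are safely past the old window */
      mq->max_size.time = target;
    }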
This mode can only be used for incoming streams that are guaranteed to be
properly timestamped.
Furthermore, we ignore sparse streams when calculating interleave and maximum
size of queues.
For the simplest of use-cases (single stream), multiqueue acts as a single
queue with a time limit of 250ms.
If there are multiple inputs, but each comes from a different streaming thread,
the maximum time limit will also end up being 250ms.
On regular files (more than one input stream from the same upstream streaming
thread), it can reduce the total memory used by as much as 10x, ending up with
max_size.time around 500ms.
Due to the adaptive nature, it can also cope with changing interleave (which
can happen commonly on some files at startup/pre-roll time).
This will mean a much lower delay before a subtitle track change takes
effect. It also avoids excessive memory usage in many cases.
This will also consider sparse streams as (individually) never full, so
as to avoid blocking all playback due to one sparse stream.
https://bugzilla.gnome.org/show_bug.cgi?id=600648
* Avoid the computation completely if we know we don't need it (not in
sync time mode)
* Make sure we don't override highest time with GST_CLOCK_TIME_NONE on
unlinked pads
* Ensure the high_time gets properly updated if all pads are not linked
* Fix the comparison in the loop of whether the target high time is the same
as the current time
* Split wake_up_next_non_linked method to avoid useless calculation
https://bugzilla.gnome.org/show_bug.cgi?id=757353
When preparing a buffering message, don't report 0% if there
are any bytes left in the queue at all. We still have something
to push, so don't tell the app to start buffering - maybe
we'll get more data before actually running dry.
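Roughly (variable names are hypothetical):

    /* never report 0% while there is still queued data to push */
    if (percent == 0 && cur_level_bytes > 0)
      percent = 1;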
Sometimes filesink cleanup during stop may fail due to an fclose error.
In this case the object is left partially cleaned up, with no file opened
but still holding the old file descriptor.
It's not possible to change the location property in such a state,
so the next start will overwrite the old file if 'append' is not set.
According to the man page and POSIX standard about fclose() behavior (extract):
------------------------------------------------------------------------
The fclose() function shall cause the stream pointed to by stream
to be flushed and the associated file to be closed.
...
Whether or not the call succeeds, the stream shall be disassociated
from the file and any buffer set by the setbuf() or setvbuf()
function shall be disassociated from the stream.
...
The fclose() function shall perform the equivalent of a close()
on the file descriptor that is associated with the stream
pointed to by stream.
After the call to fclose(), any use of stream results
in undefined behavior.
------------------------------------------------------------------------
So the file is in the 'closed' state whether fclose() succeeds or not,
and cleanup can continue.
https://bugzilla.gnome.org/show_bug.cgi?id=757596
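A sketch of how the cleanup can then proceed (a minimal illustration, not the
exact filesink code; error handling abbreviated):

    /* the stream is disassociated whether fclose() succeeds or not,
     * so drop the pointer and finish the cleanup either way */
    if (fclose (filesink->file) != 0)
      GST_ELEMENT_ERROR (filesink, RESOURCE, CLOSE,
          ("Error closing file \"%s\".", filesink->filename),
          GST_ERROR_SYSTEM);
    filesink->file = NULL;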
The input of queue/queue2 might have DTS set, in which case we want
to take that into account (instead of the PTS) to calculate position
and queue levels.
https://bugzilla.gnome.org/show_bug.cgi?id=756507
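In other words, something along the lines of:

    /* prefer the (monotonically increasing) DTS when it is set */
    GstClockTime ts = GST_BUFFER_DTS_IS_VALID (buf) ?
        GST_BUFFER_DTS (buf) : GST_BUFFER_PTS (buf);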
In order to accurately determine the amount (in time) of data
travelling in queues, we should use an increasing value.
If buffers are encoded and potentially reordered, we should be
using their DTS (increasing) and not PTS (reordered).
https://bugzilla.gnome.org/show_bug.cgi?id=756507
Previously this code was just blindly setting the cached flow return
of downstream to GST_FLOW_OK when we get a SEGMENT.
The problem is that this cannot be done blindly. If downstream was
not linked, the corresponding singlequeue source pad thread might be
waiting for the next ID to be woken up upon.
By blindly setting the cached return value to GST_FLOW_OK, and if that
stream was the only one that was NOT_LINKED, then the next time we
check (from any other thread) to see if we need to wake up a source pad
thread ... we won't even try, because none of the cached flow returns
are equal to GST_FLOW_NOT_LINKED.
This would result in that thread never being woken up.
https://bugzilla.gnome.org/show_bug.cgi?id=756645
This way one can define new tracing probes without changing the core. We are
using our own quark table, as 1) we only want to initialize them if we're
tracing, 2) we want to share them with the tracers.
Instead of a single invoke() function and a 'mask', register to individual
hooks. This avoids one level of indirection and allows us to remove the
hook enums. The message enums are now renamed to hook enums.
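A tracer then subscribes only to the hooks it cares about, e.g. (the callback
name is hypothetical):

    gst_tracing_register_hook (GST_TRACER (self), "pad-push-pre",
        G_CALLBACK (do_pad_push_pre));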
Get the number of cpus and scale process cpu-load accordingly. Switch the
cpuload to be per-mille to get smoother graphs. Add a bit more logging and use
the _OBJECT variant.
Keep tracer base class in tracer and move core support into the utils module.
Add an unstable-api guard to tracer.h so that external modules would need to
acknowledge the status by setting GST_USE_UNSTABLE_API.
Iterator may need to be resynced, for instance if pads are released
during state change.
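The usual iteration pattern then looks roughly like this:

    GValue item = G_VALUE_INIT;
    gboolean done = FALSE;

    while (!done) {
      switch (gst_iterator_next (it, &item)) {
        case GST_ITERATOR_OK:
          /* ... use g_value_get_object (&item) ... */
          g_value_reset (&item);
          break;
        case GST_ITERATOR_RESYNC:
          /* pads were added or released meanwhile, start over */
          gst_iterator_resync (it);
          break;
        case GST_ITERATOR_ERROR:
        case GST_ITERATOR_DONE:
          done = TRUE;
          break;
      }
    }
    g_value_unset (&item);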
got_eos should be protected by the object lock of the element, not of
the pad, as is the case throughout the rest of the funnel code.
https://bugzilla.gnome.org/show_bug.cgi?id=755343
After doing gst_pad_push() in case of sync_streams and cache_buffers,
if the buffer cannot be kept in the cache, it should be unreffed to avoid
a memory leak.
https://bugzilla.gnome.org/show_bug.cgi?id=755141
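Conceptually (a simplified sketch; 'buffer_cached' is a hypothetical flag, not
the actual tee code):

    /* the buffer was reffed for potential caching before the push;
     * if it ends up not being cached, release that ref */
    if (!buffer_cached)
      gst_buffer_unref (buffer);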
This caused problems with oggdemux when queue2 was
operating in queue mode and the souphttpsrc upstream
was not seekable because the server didn't support
range requests. It would then still claim seekability,
and things would go wrong from there.
This reverts commit 7b0b93dafe.
https://bugzilla.gnome.org/show_bug.cgi?id=753887
upstream_size can be negative, but queue->upstream_size is of an unsigned type.
To get a chance to update queue->upstream_size in gst_queue2_get_range(),
it should keep the default value.
https://bugzilla.gnome.org/show_bug.cgi?id=753011
Resetting the flushing state of the pads at the end of the
PAUSED_TO_READY transition will make pads handle serialized
queries again which will wait for non-active pads and might
cause deadlocks when stopping the pipeline.
Move the reset to the READY_TO_PAUSED transition instead.
https://bugzilla.gnome.org/show_bug.cgi?id=752623
While writing a test for unscheduling the gst_clock_id_wait inside the
identity element, an invalid read was found. It was caused by removing the
clock-id when calling _unschedule instead of letting the code calling _wait
remove the clock-id after being unscheduled.
https://bugzilla.gnome.org/show_bug.cgi?id=752055
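The ownership rule being exercised, as a small sketch:

    /* waiting thread */
    GstClockID id = gst_clock_new_single_shot_id (clock, wait_time);
    GstClockReturn res = gst_clock_id_wait (id, NULL);

    /* another thread may call gst_clock_id_unschedule (id) to make the
     * wait return GST_CLOCK_UNSCHEDULED, but it must not free the id;
     * the caller of _wait releases it afterwards */
    gst_clock_id_unref (id);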
Microoptimisation: Let GstQueueArray store our
item struct. That way we don't have to alloc/free
temporary QueueItem slices for every item we want
to put into the queue.
https://bugzilla.gnome.org/show_bug.cgi?id=750149
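Sketch of the struct-based GstQueueArray API (QueueItem stands in for queue's
internal item struct; sizes are illustrative):

    GstQueueArray *array;
    QueueItem item = { 0, };

    /* the array stores QueueItem-sized slots inline, no per-item allocs */
    array = gst_queue_array_new_for_struct (sizeof (QueueItem), 32);
    gst_queue_array_push_tail_struct (array, &item);
    QueueItem *head = gst_queue_array_pop_head_struct (array);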
Have all sections in alphabetical order. Also make the macro order consistent.
This is a preparation for generating the file. Remove GET_CLASS macro for
the typefind element, since it is not used and the header is not installed.
event can't be NULL: it has been dereferenced by GST_EVENT_TYPE (), and no
case frees the pointer. Remove the unnecessary check, which will always be true.
CID #1308955
This disables the segment.base adjustments, which is useful if downstream
takes care of base adjustments already (example: a combination of concat
and streamsynchronizer)
https://bugzilla.gnome.org/show_bug.cgi?id=751047
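Assuming the option is exposed as a boolean "adjust-base" property on concat,
usage would be:

    g_object_set (concat, "adjust-base", FALSE, NULL);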
If we receive more than UIO_MAXIOV (1024 typically) buffers
in a single writev call, fall back to consolidating them
into one output buffer or multiple write calls.
This could be made more optimal, but let's wait until it's
ever a bottleneck for someone.
If the reset_time value of a FLUSH_STOP event is set to TRUE, the pipeline
will have the base_time of its elements reset. This means that the concat
element's current_start_offset has to be reset to 0, since it was
calculated with the old base-time in mind.
Only FLUSH_STOP events coming from the active pad are looked at.
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
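A condensed sketch of that handling ('pad_is_active' is a hypothetical check
for the active sinkpad, not the actual concat code):

    gboolean reset_time;

    gst_event_parse_flush_stop (event, &reset_time);
    if (reset_time && pad_is_active) {
      /* base_time will be reset, so the accumulated offset is stale */
      concat->current_start_offset = 0;
    }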