Use custom code to implement flush-stop; we can't reuse the set_flushing code
because we can't touch the live_playing flag, and we need to signal the
streaming thread.
In some specific cases (like transmuxing) we want to force the element
to actually parse all incoming data even if the element deems it
unnecessary.
This property simply ignores requests from the element to enable passthrough
mode, which results in processing always being enabled.
https://bugzilla.gnome.org/show_bug.cgi?id=705621
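For illustration, an application could force full parsing roughly like this;
the property name "disable-passthrough" and the h264parse element are
assumptions here, not taken from the commit:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *parser;

    gst_init (&argc, &argv);

    /* assumed property name: force the parser to process all data
     * instead of switching to passthrough */
    parser = gst_element_factory_make ("h264parse", NULL);
    if (parser != NULL) {
      g_object_set (parser, "disable-passthrough", TRUE, NULL);
      gst_object_unref (parser);
    }

    return 0;
  }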
Adds a variant of the _push function that doesn't check the queue limits
before adding the new item. It is useful when pushing an item to the
queue must not block the thread.
One particular scenario is when the queue is used to serialize buffers
and events that are going to be pushed from another thread. The
dataqueue should have a limit on the number of buffers it stores to
avoid large memory consumption, but events can be considered to have
negligible impact on memory compared to buffers. So it is useful to be
able to push event items into the queue even when it is already full;
inserting an item of insignificant size shouldn't matter.
This scenario happens in adaptive elements (dashdemux / mssdemux), as
there is a single download thread fetching buffers and putting them into
the dataqueues for the streams. This same download thread can also
generate events in some situations, such as caps changes, EOS or
internal control events. There can be a deadlock at preroll if the first
buffer fetched is large enough to fill the dataqueue and the next
iteration of the download thread decides to push an event to this same
dataqueue before fetching buffers for the other streams; if this push
blocks, the pipeline will be stuck in preroll as no more buffers will be
downloaded.
There is a somewhat common practice in DASH streams of having a single
very large buffer for audio and one for video, so this will always
happen, as the download thread will have to push an EOS right after
fetching the first buffer for any stream.
API: gst_data_queue_push_force
https://bugzilla.gnome.org/show_bug.cgi?id=705694
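A rough sketch of the intended usage pattern on the download-thread side
(simplified, not the actual demuxer code; the include path assumes the
current location of GstDataQueue in libgstbase):

  #include <gst/gst.h>
  #include <gst/base/gstdataqueue.h>

  static void
  free_item (GstDataQueueItem * item)
  {
    gst_mini_object_unref (item->object);
    g_slice_free (GstDataQueueItem, item);
  }

  /* Buffers respect the queue limits and may block the download
   * thread until there is room again. */
  static gboolean
  push_buffer (GstDataQueue * queue, GstBuffer * buffer)
  {
    GstDataQueueItem *item = g_slice_new0 (GstDataQueueItem);

    item->object = GST_MINI_OBJECT_CAST (buffer);
    item->size = gst_buffer_get_size (buffer);
    item->visible = TRUE;
    item->destroy = (GDestroyNotify) free_item;

    if (!gst_data_queue_push (queue, item)) {
      item->destroy (item);     /* queue is flushing */
      return FALSE;
    }
    return TRUE;
  }

  /* Events are pushed past the limits so the download thread never
   * blocks on a queue already filled by a large buffer. */
  static gboolean
  push_event (GstDataQueue * queue, GstEvent * event)
  {
    GstDataQueueItem *item = g_slice_new0 (GstDataQueueItem);

    item->object = GST_MINI_OBJECT_CAST (event);
    item->visible = FALSE;
    item->destroy = (GDestroyNotify) free_item;

    if (!gst_data_queue_push_force (queue, item)) {
      item->destroy (item);     /* queue is flushing */
      return FALSE;
    }
    return TRUE;
  }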
The current documentation is contradictory: it states that the returned
value is valid only while the query is valid, which implies a
'transfer none' policy, but the tooltip for the 'out' annotation states
that the default is 'transfer full'.
Add the missing 'transfer none' annotations to fix this.
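For illustration, the annotation style this adds looks roughly like the
following; the function and parameter names are hypothetical, not the
ones touched by the commit:

  /* hypothetical function, shown only to illustrate the annotation */
  /**
   * gst_foo_query_parse_thing:
   * @query: a #GstQuery
   * @thing: (out) (transfer none): the parsed value; it stays owned by
   *     @query and is only valid while @query is valid
   */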
When the range for a property is defined as -INT_MAX-1 .. INT_MAX, like
the xpos in videomixer, the following expression in the macro
definitions of convert_g_value_to_##type (and the equivalent in
convert_value_to_##type)
v = pspec->minimum + (g##type) ROUNDING_OP ((pspec->maximum - pspec->minimum) * s);
is converted to:
v = -2147483648 + (g##type) ROUNDING_OP ((2147483647 - -2147483648) * s);
(2147483647 - -2147483648) overflows to -1 and the net result is:
v = -2147483648 + (g##type) ROUNDING_OP (-1 * s);
so v only takes the values -2147483648 for s == 0 and 2147483647
for s == 1.
Rewriting the expression as minimum*(1-s) + maximum*s gives the correct
result in this case.
https://bugzilla.gnome.org//show_bug.cgi?id=705630
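A small standalone check of the arithmetic (using gint as the target
type and C round() standing in for ROUNDING_OP):

  #include <glib.h>
  #include <math.h>

  int
  main (void)
  {
    gint minimum = -G_MAXINT - 1;       /* -2147483648 */
    gint maximum = G_MAXINT;            /*  2147483647 */
    gdouble s = 0.25;                   /* control value in [0, 1] */
    gint v_old, v_new;

    /* old form: (maximum - minimum) wraps around to -1, so the scaled
     * term never contributes anything useful */
    v_old = minimum + (gint) round ((maximum - minimum) * s);

    /* rewritten form: the arithmetic happens in double, no overflow */
    v_new = (gint) round (minimum * (1.0 - s) + maximum * s);

    g_print ("old: %d  new: %d\n", v_old, v_new);
    return 0;
  }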
When in download buffering mode, queue2 didn't check whether a requested
offset falls in an undownloaded range that lies before the currently
in-progress range, causing seeks to an earlier offset to, well, take a
while.
Calling gst_buffer_get_size represented 2/3 of the cost of
helper_find_peek, which is called whenever a typefind function wants to
peek at data. We already know the size (from the GstMapInfo), so just
use that.
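The gist of the change, sketched outside the actual helper: once the
buffer is mapped, the GstMapInfo already carries the size.

  #include <gst/gst.h>

  static void
  peek_mapped (GstBuffer * buffer)
  {
    GstMapInfo info;

    if (!gst_buffer_map (buffer, &info, GST_MAP_READ))
      return;

    /* before: gsize size = gst_buffer_get_size (buffer); */
    gsize size = info.size;     /* same value, already known */

    /* ... inspect info.data up to size bytes ... */
    (void) size;

    gst_buffer_unmap (buffer, &info);
  }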
Tweak the documentation slightly to clarify that the estimated-total in
a buffering query is the remaining time of the download, not the total
time for the complete download. Also indicate the unit used.
https://bugzilla.gnome.org/show_bug.cgi?id=704934
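For reference, this is the value an application reads back with
gst_query_parse_buffering_range(); a minimal sketch (pipeline setup
omitted):

  #include <gst/gst.h>

  static void
  print_remaining_download_time (GstElement * pipeline)
  {
    GstQuery *query = gst_query_new_buffering (GST_FORMAT_PERCENT);

    if (gst_element_query (pipeline, query)) {
      GstFormat format;
      gint64 start, stop, estimated_total;

      gst_query_parse_buffering_range (query, &format, &start, &stop,
          &estimated_total);
      /* estimated-total: estimated remaining download time, in ms */
      g_print ("remaining download time: %" G_GINT64_FORMAT " ms\n",
          estimated_total);
    }
    gst_query_unref (query);
  }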
When asked about the scheduling flags, first check with upstream and
simply add the _SEEKABLE flag when using a temporary file as storage.
This enables forwarding of _SEQUENTIAL and _BANDWIDTH_LIMITED from
sources if needed.
https://bugzilla.gnome.org/show_bug.cgi?id=704927
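A simplified sketch of the resulting query handling (not the actual
queue2 code):

  #include <gst/gst.h>

  static gboolean
  handle_scheduling_query (GstPad * sinkpad, GstQuery * query,
      gboolean using_temp_file)
  {
    GstSchedulingFlags flags;
    gint minsize, maxsize, align;

    /* first ask upstream so _SEQUENTIAL / _BANDWIDTH_LIMITED survive */
    if (!gst_pad_peer_query (sinkpad, query))
      return FALSE;

    gst_query_parse_scheduling (query, &flags, &minsize, &maxsize, &align);

    /* a temporary file backing the queue makes the data seekable */
    if (using_temp_file)
      flags |= GST_SCHEDULING_FLAG_SEEKABLE;

    gst_query_set_scheduling (query, flags, minsize, maxsize, align);
    return TRUE;
  }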
A new active pad might not be notified in some cases, which results
in the current track number not being set in playbin.
The active-pad notification is only sent from the chain and sink_event
functions, and only when the buffer or event that triggered the
active-pad selection came from the newly activated pad, so in any other
case the notification is never sent.
https://bugzilla.gnome.org/show_bug.cgi?id=704691
If all stream-start messages have a group id (it remains optional for
backwards compatibility), we only consider the stream started once they
all carry the same group id.
In 2.0 we should make the group id mandatory.
All streams that have the same group id are supposed to be played
together, i.e. all streams inside a container file should have the
same group id but different stream ids. The group id should change
each time the stream is started, resulting in different group ids
each time a file is played for example.
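On the application side, the group id of a stream-start message can be
read back like this (gst_message_parse_group_id() returns FALSE when the
element didn't provide one); a minimal sketch:

  #include <gst/gst.h>

  static void
  on_stream_start (GstMessage * msg)
  {
    guint group_id;

    if (gst_message_parse_group_id (msg, &group_id))
      g_print ("stream-start from %s, group id %u\n",
          GST_OBJECT_NAME (msg->src), group_id);
    else
      g_print ("stream-start from %s without a group id\n",
          GST_OBJECT_NAME (msg->src));
  }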