The offset can be -1 when we are configured in TIME format. Instead of
failing the seek and erroring, do what an offset of -1 is supposed to
do and simply read from the current offset.
It was used for pad-alloc in 0.10 but currently is completely unused
and not necessary. All pad access is protected by the tee object lock
and by keeping another reference to the current pad.
This makes buffering stop in case a stream switch happens. This is
important for adaptive streams that can disable not-linked streams
to avoid consuming the network bandwidth.
https://bugzilla.gnome.org/show_bug.cgi?id=719575
After patch bda406c4, the state of the singlequeue was set to OK, but nothing
would then wake up the thread, as the other wakeup functions only look at
singlequeues that are marked as having received not-linked.
https://bugzilla.gnome.org/show_bug.cgi?id=708200
* add many missing declarations to sections
* GstController has been removed, update docs
* skip GstIndex when generating documentation
* rephrase so gtkdoc doesn't imagine return value
* add missing argument description for gst_context_new()
* document GstOutputSelectorPadNegotiationMode and move to header-file
https://bugzilla.gnome.org/show_bug.cgi?id=719614
Use gap events to advance the selector's pad position.
This is relevant to keep sync_streams mode working when one of the
streams doesn't have data all the time.
Since the refactoring of GstContext (commits
qc9fa2771b508e9aaeecc700e66e958190476f,
a7f5dc8b8a,
690326f906dc82e41ea58b81cdb2e3e88b754,
d367dc1b0d4ecb37f4d27267e03d7bf0c6c06a6, and
82d158aed3f2e8545e1e7d35085085ff58f18) I am no longer able to get
a shared context for an element that is used twice in a pipeline.
I used the documentation and eglglessink as my reference for
implementing the GstContext logic.
As the code was tied to a hardware decoder, I have ported the
GstContext code to fakesink to show the problem. Using the old
API a single ExampleMgr instance is created, but using the new
API each element creates its own instance.
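For reference, this is roughly the lookup pattern I tried to implement,
based on the documentation (sketch only; the "ExampleMgr" context type and
the helper name are made up for this reproducer):

    static void
    ensure_example_context (GstElement * element, GstPad * pad)
    {
      GstQuery *query;
      GstContext *ctx = NULL;

      /* 1) ask the neighbouring elements whether one of them already
       *    has a context of this type */
      query = gst_query_new_context ("ExampleMgr");
      if (gst_pad_peer_query (pad, query)) {
        gst_query_parse_context (query, &ctx);
        gst_element_set_context (element, ctx);
      }
      gst_query_unref (query);

      /* 2) otherwise ask the application; the reply arrives through the
       *    GstElement::set_context vfunc */
      if (ctx == NULL)
        gst_element_post_message (element,
            gst_message_new_need_context (GST_OBJECT (element), "ExampleMgr"));
    }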
In some cases the wait for more data was happening without updating
the buffering state, meaning the API user would not be able to notice
that it should pause the pipeline and update the UI to indicate buffering;
the video would likely stutter instead.
https://bugzilla.gnome.org/show_bug.cgi?id=707648
If the multiqueue has automatically grown, chances are good that
we will cause the pipeline to starve if the maximum level is reduced
below that automatically grown size.
https://bugzilla.gnome.org/show_bug.cgi?id=707156
When a buffering query is handled it uses the get_buffering_percent()
function to get some statistics. Unfortunately this function also
calculates whether the queue should be buffering and adapts the
global queue2 state in case of state transitions from/to buffering
(including whether a buffering message was posted on the bus!).
This means that there is a race which can cause buffering messages
to never be posted if the global state changes happen as a result of a
query instead of resulting from bytes flowing in/out.
Spotted by Sjoerd Simons.
Change to only query state in get_buffering_percent() and update
state only in update_buffering().
https://bugzilla.gnome.org/show_bug.cgi?id=705332
When in download buffering mode queue2 didn't check if a range offset is
in an undownloaded range before the currently in-progress range, causing
seeks to an earlier offset to, well, take a while.
When asked about the scheduling flags first check with upstream and
simply add the _SEEKABLE flag when using a temporary file as storage.
This enables the forwarding of _SEQUENTIAL and _BANDWIDTH_LIMITED from
sources if needed.
https://bugzilla.gnome.org/show_bug.cgi?id=704927
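Roughly the idea, as a sketch (not the literal queue2 code; "using_temp_file"
is a stand-in for the element's real state):

    static gboolean
    handle_scheduling_query (GstPad * sinkpad, GstQuery * query,
        gboolean using_temp_file)
    {
      GstSchedulingFlags flags = 0;
      gint minsize = 1, maxsize = -1, align = 0;

      /* first let upstream fill in the query so that flags such as
       * _SEQUENTIAL and _BANDWIDTH_LIMITED are preserved */
      if (gst_pad_peer_query (sinkpad, query))
        gst_query_parse_scheduling (query, &flags, &minsize, &maxsize, &align);

      /* with a temporary file as storage we can always seek */
      if (using_temp_file)
        flags |= GST_SCHEDULING_FLAG_SEEKABLE;

      gst_query_set_scheduling (query, flags, minsize, maxsize, align);
      gst_query_add_scheduling_mode (query, GST_PAD_MODE_PULL);
      return TRUE;
    }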
A new active pad might not be notified in some cases, which results
in the current track number not being set in playbin.
The active-pad notification is only sent in the chain and sink_event
functions, and only when the buffer or event that triggered the active
pad selection is from the newly activated pad. So in any other case
the notification will never be sent.
https://bugzilla.gnome.org/show_bug.cgi?id=704691
We must be certain that we don't cause a deadlock when blocking the serialized
queries. One such deadlock can happen when we are buffering and downstream is
blocked in preroll and a serialized query arrives. Downstream will not unblock
(and allow our query to execute) until we complete buffering and buffering will
not complete until we can answer the query.
https://bugzilla.gnome.org/show_bug.cgi?id=702840
Otherwise we might get deadlocks caused by lock order inversion:
During the chain function the stream lock is first locked and then the
inputselector lock. During pad release we first took the inputselector
lock, and then deactivating the pad would take the stream lock.
There's no reason why the inputselector lock should be required while
deactivating and removing the pad, it's only needed before.
https://bugzilla.gnome.org/show_bug.cgi?id=704002
We must be certain that we don't cause a deadlock when blocking the serialized
queries. One such deadlock can happen when we are buffering and downstream is
blocked in preroll and a serialized query arrives. Downstream will not unblock
(and allow our query to execute) until we complete buffering and buffering will
not complete until we can answer the query.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=702840
During FLUSH_START the query needs to be unblocked already, otherwise
it can lead to deadlocks if the FLUSH_START is the result of something
done from the streaming thread of the srcpad (the queue will never be
emptied!).
Fixes some deadlocks during flushing.
And store queue items differently to not accidentally read
already unreffed queries when flushing. Queries are owned by
upstream and not us.
In the case the source has no caps, caps must be sent before segment. This
fixes a few unit tests that were failing due to the new misordering warning.
https://bugzilla.gnome.org/show_bug.cgi?id=699968
There's no point in storing them and sending them later, and doing so would
later require distinguishing between events that should come before caps and
those that should come after.
https://bugzilla.gnome.org/show_bug.cgi?id=692784
API: GST_PLUGIN_STATIC_DECLARE()
API: GST_PLUGIN_STATIC_REGISTER()
Based on a patch by Håvard Graff <havard.graff@tandberg.com>.
This now allows GST_PLUGIN_DEFINE() to create a static plugin if
GST_PLUGIN_BUILD_STATIC is defined. The resulting plugin can be
statically linked or dynamically linked during compilation but
can't be dynamically loaded during runtime.
Also adds GST_PLUGIN_STATIC_DECLARE() and GST_PLUGIN_STATIC_REGISTER(),
which allow registering a statically linked plugin easily.
It is still required to manually register every single statically linked
plugin from inside the application as this can't be automated in a portable
way.
A new configure parameter --enable-static-plugins was added that allows
building all of the plugins in this module as static plugins.
Fixes bug #667305.
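For example, assuming a plugin whose name in GST_PLUGIN_DEFINE() is
"coreelements" and which was built with GST_PLUGIN_BUILD_STATIC defined,
an application would register it roughly like this (sketch):

    #include <gst/gst.h>

    /* declares the register function generated by GST_PLUGIN_DEFINE() */
    GST_PLUGIN_STATIC_DECLARE (coreelements);

    int
    main (int argc, char **argv)
    {
      GstElement *e;

      gst_init (&argc, &argv);

      /* must be called once per statically linked plugin */
      GST_PLUGIN_STATIC_REGISTER (coreelements);

      /* elements from the plugin are now available as usual */
      e = gst_element_factory_make ("identity", NULL);
      if (e)
        gst_object_unref (e);
      return 0;
    }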
When querying a queue that is flushing we end up adding
a query to the queuearray without taking a reference to
that query (because the normal functionality is to block
until that query is done and discarded from the queue).
This later causes problems if the query is unreffed outside
of the queue before we discard the queue. There is a check
to avoid unreffing any lingering query-objects, but since
the query has been deleted that check fails.
This commit depends on other fixes done to gst_queue_array_find()
and gst_queue_array_drop_element().
https://bugzilla.gnome.org/show_bug.cgi?id=692691
The check of whether a SingleQueue is full was not correct.
Rewrote single_queue_overrun_cb() so that it checks the correct variables
when deciding whether the queue has reached its hard limits, and so that
it increases the max buffer limit only once per call.
https://bugzilla.gnome.org/show_bug.cgi?id=690557
Implement the same behaviour as gst_pad_push_event when pushing sticky events
fails, that is don't fail immediately but fail when data flow resumes and upstream
can aggregate properly.
This fixes segment seeks with decodebin and unlinked audio or video branches.
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=687899
In flush-on-eos=true mode any data remaining in the queue is
discarded when an EOS event is received, and the EOS passed
downstream as soon as possible (instead of waiting for all
buffers in the queue to get processed by downstream first).
May or may not be useful in capture/encoding scenarios.
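For instance (the "queue" variable stands for whatever queue instance the
application created):

    /* drop queued data on EOS and pass the EOS on quickly */
    g_object_set (queue, "flush-on-eos", TRUE, NULL);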
Basetransform should not try to negotiate in passthrough mode but
respect the order of what we return in the transform_caps method.
A typical case is that you specify some specific new caps in the
caps property but also allow the current caps to pass.
Fix race that could cause data corruption when seeking in ring buffer
mode.
In perform_seek_to_offset(), called from the demuxer's pull_range
request, we drop the lock, tell upstream (usually a http source)
to seek to a different offset, then re-acquire the lock before we
do things to the ranges. However, between us sending the seek event
and re-acquiring the lock, the source thread might already have pushed
some data and moved along the range's writing_pos beyond the seek
offset. In that case we don't want to set the writing position back
to the requested seek position, as it would cause data to be written
to the wrong offset in the file or ring buffer.
Reproducible doing seek-emulated fast-forward/backward on 006653.
Conflicts:
plugins/elements/gstqueue2.c
Only one STREAM_START event should be let through, else it will
confuse downstream elements that think a new stream is starting
whereas in fact we are just switching to a different input.
In the future we might want to let them through but with the same
sequence number.
This guarantees a bit more consistency in which input stream will
be selected by default. It would previously be the first pad on which
an event/buffer/query was received ... which was racy and non-predictable.
The buffering-left field in the buffering message should contain a time estimate
in milliseconds about how long the buffering is going to take. We can calculate
this value when we do rate_estimates.
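Roughly, as a sketch with illustrative variable names (not the actual queue2
fields): given the average input rate in bytes per second and the number of
bytes still missing, the estimate posted with the buffering message would be

    gint64 buffering_left = -1;   /* -1 means unknown */

    if (avg_in > 0)
      buffering_left = gst_util_uint64_scale (bytes_missing, 1000, avg_in);

    gst_message_set_buffering_stats (msg, GST_BUFFERING_DOWNLOAD,
        avg_in, avg_out, buffering_left);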
Only consider the queue empty if the minimum thresholds
are not reached and data is at the queue head. Otherwise
we would block forever on serialized queries.
This also makes sending of serialized events, like caps, happen
faster and potentially improves negotiation performance.
Fixes bug #679458.
Postpone the #ifdef to a point after glib.h (via gstfdsink.h) is included
so that the needed defines and header includes can be done correctly,
especially on Visual C++ builds.
https://bugzilla.gnome.org/show_bug.cgi?id=679112
Save the value of the pad's got_eos in gst_funnel_release_pad,
before calling gst_element_remove_pad. This is because
gst_element_remove_pad may free the pad.
https://bugzilla.gnome.org/show_bug.cgi?id=678017
If multiple sources are plugged into the funnel and one of the
sources emits an EOS, that event is propagated through the funnel
even though other sources connected to the funnel may still be
pushing data. This patch waits to send an EOS event until the
funnel has received an EOS event on each sinkpad.
Ported from d397ea97 in 0.10 branch.
This adds properties to use the clock time for deciding when
to drop buffers for inactive pads and a property to buffer all
not rendered buffers for the active pad to allow pad switching
without losing any buffers at all.
Conflicts:
plugins/elements/gstinputselector.c
In commit bf0964b6 a check for whether the pad is activated was not carried
over. This leads to an attempt to pull while in push mode when force_caps
is set. In this case, without the attached check, even when activated
in pull mode we activate back to push mode.
This is from the comment in the previous code, case number eight:
8. if the sink pad is activated, we are in pull mode. succeed.
- otherwise activate both pads in push mode and succeed.
Putting it back fixes playback of webm in webkit+gstreamer 1.0.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=676003
When we don't have the requested data in the ringbuffer and we move our read
pointer to the requested position, signal the delete cond to inform the writer
that we changed the current fill level. If we don't, the writer might stay
blocked and we might wait forever.
When we don't have enough bytes in the ringbuffer to satisfy the current
request, first update the current read position before waiting. If we don't do
that, the ringbuffer might appear full and the writer will never write more
bytes to wake us up.
Only add the range when we receive a segment event on the sinkpad. The add_range
method will modify the write position, which only makes sense to do on the
sinkpad.
Set the seeking flag right before we send a seek event upstream and discard all
data until we see a flush-stop again. We need to do this because we activate
the range that we seek to immediately after sending the seek event and it is
possible that we receive data in our chain function from before the seek
which would then be added to the wrong range resulting in data corruption.
When using the ringbuffer, handle the newsegment event like we handle it when
using the temp-file mode: create a new range for the new byte segment. The new
segment should normally already be created when we do a seek.
Doesn't actually change the default value, just makes use of the existing
define. Superficial testing with fakesink and jpegdec did
not reveal improved performance for bigger block sizes, so leave
default as it is.
A flush from the upstream element should not make buffering go to 0, the next
pull request might be inside a range that we have and then we don't need to
buffer at all. If the next pull is outside anything we have, buffering will
happen as usual anyway.
We want to forward the flush events received on the sinkpad whenever the srcpad
is activated in pushmode, which can also happen when using the RINGBUFFER or
DOWNLOAD mode and downstream failed to activate us in pull mode.
When we have EOS, read the remaining bytes in the buffer and make sure we don't
wait for more data. Also clip the output buffer to the amount of remaining
bytes.
When using the ringbuffer mode, the buffer is filled when we reach the
max_level.bytes mark or the total size of the ringbuffer, whichever is smaller.
Use a threshold variable to hold the maximum distance from the current position
for which we will wait instead of doing a seek.
When using the ringbuffer and the requested offset is not available, avoid
waiting until the complete ringbuffer is filled but instead do a seek when the
requested data is further than the threshold.
Avoid doing the seek twice in the ringbuffer case.
Use the same threshold for ringbuffer and download buffering.
Remove GST_MAJORMINOR and replace it by GST_API_VERSION
Also set GST_VERSION_{MAJOR,MINOR,MICRO,NANO} explicitly
now.
All versions are at 1.0.0 now for the release soon but
API/ABI can still change until the 1.0.0 release.
Next release versions until 1.0.0 will be 0.10.9X and
these will be release candidates. GST_VERSION_* will
nonetheless stay at 1.0.0.0.
gst_buffer_take_memory -> gst_buffer_insert_memory because insert is what the
method does.
Make all methods deal with ranges so that we can replace, merge, remove and map
a certain subset of the memory in a buffer. With the new methods we can make
some code nicer and reuse more code. Being able to deal with a subset of the
buffer memory allows us to optimize more cases later (most notably RTP headers
and payload that could be in different memory objects).
Make some more convenient macros that call the more generic range methods.
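A small sketch of the kind of thing this enables (here the first two memory
blocks of a buffer are assumed to hold header data):

    static void
    drop_header_memories (GstBuffer * buffer)
    {
      GstMapInfo info;

      /* map only memories [0, 2) instead of the whole buffer */
      if (gst_buffer_map_range (buffer, 0, 2, &info, GST_MAP_READ)) {
        /* inspect info.data / info.size here */
        gst_buffer_unmap (buffer, &info);
      }

      /* and drop exactly that sub-range, keeping the payload memory */
      gst_buffer_remove_memory_range (buffer, 0, 2);
    }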
We reset all the waiting streams, let them push another buffer to
see if they're now active again. This allows faster switching
between streams and prevents deadlocks if downstream does any
waiting too.
Also improve locking a bit, srcresult must be protected by the
multiqueue lock too because it's used/set from random threads.
Otherwise we might block forever because upstream (e.g. multiqueue) keeps
waiting for the previously active stream (which is blocked here in
inputselector) to return before pushing something on the newly selected stream.
Improve the docs of the get/pull_range functions, define the lifetime of the
buffer in case of errors and short reads.
Make sure the code does what the docs say.
Make it so that one can pass a buffer to get/pull_range for the upstream
element to write into. When passing NULL, upstream should allocate a buffer,
like in 0.10.
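A sketch of the two calling conventions from the pulling element's point of
view ("sinkpad" and "offset" would come from the element; sizes are arbitrary):

    static GstFlowReturn
    pull_both_ways (GstPad * sinkpad, guint64 offset)
    {
      GstBuffer *buf = NULL;
      GstFlowReturn ret;

      /* NULL buffer: upstream allocates, like in 0.10 */
      ret = gst_pad_pull_range (sinkpad, offset, 4096, &buf);
      if (ret == GST_FLOW_OK) {
        gst_buffer_unref (buf);
        buf = NULL;
      }

      /* caller-provided buffer: upstream (e.g. filesrc) writes into it
       * directly and skips the allocation */
      buf = gst_buffer_new_allocate (NULL, 4096, NULL);
      ret = gst_pad_pull_range (sinkpad, offset, 4096, &buf);
      gst_buffer_unref (buf);

      return ret;
    }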
We also need to change the probes a little because before the pull probe, there
could already be a buffer passed. This then allows us to use the same PROBE
macro for before and after pulling.
While we're at the probes, make the query probe more powerful by handling the
GST_PAD_PROBE_DROP return value. Returning _DROP from a query probe will now
return TRUE upstream and will not forward the query to the peer or handler.
Also handle _DROP for get/pull_range properly by not dispatching to the
peer/handler or by generating EOS when the probe returns DROP and no buffer.
Make filesrc handle the non-NULL buffer passed in the get_range function and
skip the allocation in that case, writing directly into the downstream provided
buffer.
Update tests because now we need to make sure to not pass a random value in the
buffer pointer to get/pull_range.
Group the extra allocation parameters in a GstAllocationParams structure to make
it easier to deal with them and so that we can extend them later if needed.
Make gst_buffer_new_allocate() take the GstAllocationParams for added
functionality.
Add boxed type for GstAllocationParams.
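A small sketch of using the new structure (the alignment value is just an
example):

    static GstBuffer *
    new_aligned_buffer (gsize size)
    {
      GstAllocationParams params;

      gst_allocation_params_init (&params);
      params.align = 15;   /* alignment mask, i.e. 16-byte aligned memory */

      /* NULL selects the default allocator */
      return gst_buffer_new_allocate (NULL, size, &params);
    }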
Rename _do_simplify() to _simplify(). The name was introduced as a replacement
method for a deprecated method but we can now rename it again.
Fix some docs.
Make gst_caps_do_simplify() take ownership of the input caps and produce a
simplified output caps. This removes the requirement of having writable input
caps and the method can make the caps writable only when needed.
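The usage pattern then becomes (sketch):

    /* takes ownership of the caps and returns the simplified caps,
     * which are only made writable if something actually changed */
    caps = gst_caps_simplify (caps);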
Rename gst_base_transform_suggest to gst_base_transform_reconfigure_sink because
that is what it does. Also remove the caps and size because that is not needed.
Rename gst_base_transform_reconfigure to gst_base_transform_reconfigure_src.
Remove some old unused code in capsfilter.
Make it possible to configure a GDestroyNotify and user_data for
gst_memory_new_wrapped(). This allows for more flexible wrapping of foreign
memory blocks.
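A sketch of wrapping a foreign block with a free function (here plain
malloc'd data stands in for any foreign memory):

    static GstMemory *
    wrap_foreign_block (gpointer data, gsize size)
    {
      /* g_free() is called with the user_data (here the data pointer
       * itself) when the last reference to the memory is dropped */
      return gst_memory_new_wrapped (0, data, size, 0, size, data, g_free);
    }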
Remove the link functions and always start the pad task on the srcpad. If
applications need to autoplug they can put a blocking probe on the srcpad like
they would with any other element.
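An application that needs to autoplug can block the srcpad like this
(sketch; the callback name is made up):

    static GstPadProbeReturn
    typefound_block_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      /* decide what to plug downstream here, then remove the probe
       * to let data flow again */
      return GST_PAD_PROBE_OK;
    }

    static void
    block_typefind_srcpad (GstPad * srcpad)
    {
      gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
          typefound_block_cb, NULL, NULL);
    }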
Default to not creating lots of overhead by doing a couple of
g_strdup_printf()/g_free() per buffer or event just to generate
a last-message update that rarely anyone listens to. This means
that you need to set silent=false explicitly in order to get
last-message dumps in gst-launch -v now. On the upside, people
won't inadvertently end up benchmarking g_strdup_printf()
performance instead of gstreamer data handling performance any
more.
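For example, in code (the "fakesink" variable stands for the element
instance; on the gst-launch command line it's silent=false):

    /* explicitly re-enable last-message updates */
    g_object_set (fakesink, "silent", FALSE, NULL);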
Maybe the silent property should be renamed to enable-last-message
or something like that?