When seeking back to the original state after duration seeks, let
upstream know that we want the whole file, including the last
byte that wasn't requested during the duration seeks.
https://bugzilla.gnome.org/show_bug.cgi?id=724633
Video time uses the 'segment' and text time should use
the 'text_segment'.
If different segments are used for video and text, video and
subtitles end up out of sync.
A change in gst_ogg_demux_do_seek caused oggdemux to wait for
a page for each of the streams, including a skeleton stream if
one was present. Since Skeleton only has header pages, that
was never going to end well.
Also, the code was skipping CMML streams when looking for pages,
so it would also have broken on CMML streams.
Thus, we change the code to disregard Skeleton streams, as well
as discontinuous streams (such as CMML and Kate). While it may
be desirable to consider Kate streams too (in order to avoid
losing a subtitle starting near the seek point), this may be
a performance drag when seeking where no subtitles are. Maybe
one could add a "give up" threshold for such discontinuous
streams, so we'd get any page if there is one, but do not end
up reading preposterous amounts of data otherwise.
In any case, it is important that the code that determines
the number of streams to look for pages for remains consistent with
the "early out" conditions of the code that actually parses
the incoming pages, lest we never decrease the pending counter
to zero.
This fixes seeking on a file with a Skeleton track, which previously
read the whole file on each seek.
https://bugzilla.gnome.org/show_bug.cgi?id=719615
Ogg data is read chunk by chunk, and the chunk size used was
originally taken from libvorbisfile. However, this value leads
to poor performance when used on an Ogg file with large pages
(Ogg pages can be close to 64 KB).
We can't just use a larger chunk size, since this will decrease
performance on small page streams, so we use an adaptive scheme
where the chunk size is twice the largest page size we've seen
so far in the stream. For "typical" Ogg/Vorbis, this gives us
almost the same chunk size (a bit lower), and this lets us get
better performance on streams with large pages.
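A minimal sketch of the adaptive scheme, assuming a hypothetical field
that tracks the largest page size seen so far:

  #include <glib.h>

  #define DEFAULT_CHUNK_SIZE 8500   /* historical value from libvorbisfile */

  /* Minimal stand-in for the demuxer state; 'max_page_size' is an
   * assumed field, not the actual oggdemux member name. */
  typedef struct {
    gsize max_page_size;
  } OggReadState;

  static gsize
  get_chunk_size (OggReadState * state, gsize current_page_size)
  {
    /* remember the largest page seen so far ... */
    if (current_page_size > state->max_page_size)
      state->max_page_size = current_page_size;

    /* ... and pull roughly two of those at a time; never go below the
     * old default so streams with small pages behave as before */
    return MAX (2 * state->max_page_size, (gsize) DEFAULT_CHUNK_SIZE);
  }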
If we receive a flush event before our caps are set, we will
end up trying to create a theora encoder even though we are not ready.
Avoid that situation by making sure we are initialized before accepting
a flush.
https://bugzilla.gnome.org/show_bug.cgi?id=709858
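The gist of the guard, as a hedged sketch (the 'initialised' flag and
the helper name are assumptions about the element's internals):

  #include <glib.h>

  /* Minimal stand-in for the encoder element state. */
  typedef struct {
    gboolean initialised;       /* set once set_format()/caps have run */
    /* ... th_info, th_enc_ctx, etc. ... */
  } TheoraEncState;

  /* Called from the flush path: only reset and re-create the encoder if
   * we were actually initialised; otherwise there is nothing to flush. */
  static gboolean
  theora_enc_flush (TheoraEncState * enc)
  {
    if (!enc->initialised)
      return TRUE;

    /* ... tear down and re-create the libtheora encoder as before ... */
    return TRUE;
  }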
Initial support for the new ALSA chmap API.
Just translate the current chmap to GstAudioChannelPosition during
setup. There is no function to specify the channel map manually yet,
so it is still impossible to assign non-standard positions or to
configure a different order even if the hardware allows it.
https://bugzilla.gnome.org/show_bug.cgi?id=709755
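A sketch of the kind of translation involved, assuming alsa-lib >= 1.0.27
for snd_pcm_get_chmap(); the mapping table here is abbreviated:

  #include <alsa/asoundlib.h>
  #include <gst/audio/audio.h>

  /* Translate the device's current chmap into GstAudioChannelPositions.
   * Only a few positions are shown; a real table covers all SND_CHMAP_*
   * values. */
  static gboolean
  get_channel_positions (snd_pcm_t * pcm, GstAudioChannelPosition * pos,
      guint channels)
  {
    snd_pcm_chmap_t *chmap = snd_pcm_get_chmap (pcm);
    guint i;

    if (chmap == NULL || chmap->channels != channels) {
      free (chmap);
      return FALSE;
    }

    for (i = 0; i < channels; i++) {
      switch (chmap->pos[i]) {
        case SND_CHMAP_MONO:
          pos[i] = GST_AUDIO_CHANNEL_POSITION_MONO;
          break;
        case SND_CHMAP_FL:
          pos[i] = GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT;
          break;
        case SND_CHMAP_FR:
          pos[i] = GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT;
          break;
        case SND_CHMAP_LFE:
          pos[i] = GST_AUDIO_CHANNEL_POSITION_LFE1;
          break;
        default:
          pos[i] = GST_AUDIO_CHANNEL_POSITION_INVALID;
          break;
      }
    }
    free (chmap);
    return TRUE;
  }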
Store the seek stop and seqnum and properly restore them when
receiving the corresponding Segment from upstream. Also fixes
seqnum for converted seek events.
When bisecting after an earliest time has been found, we need
to only consider the stream for which the earliest time was found.
Before, the following scenario could be and was encountered:
a) Find the earliest time for stream X
b) bisect and find a page whose granuletime is indeed < target, but
which contains another stream.
c) decide to seek at the wrong offset, sometimes lower than
the real one, in which case the error went undetected, or
d) the offset was higher, and thus the actual target keyframe was
not processed, and packets were skipped while waiting
for a granulepos.
https://bugzilla.gnome.org/show_bug.cgi?id=700537
The problem experienced is that the EOS was never emitted by oggmux during a
rendering with GES. The proposed patch checks if the pad is EOS before deciding
it's the "best pad".
https://bugzilla.gnome.org/show_bug.cgi?id=699792
If our previous flow return was NOT_LINKED, don't try to push on the pads some
more. If we get a RECONFIGURE event on the pad, try to push on it again.
Change the check so that a current_time equal to the stop produces
EOS instead of only the next one. Also, segment.start can't be NONE,
so remove that check.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=696899
gst_query_set_nth_allocation_pool() requires there to be a pool in the
query already. This is not always the case when we get the query from
upstream. Use gst_query_add_allocation_pool() instead in that case.
https://bugzilla.gnome.org/show_bug.cgi?id=681719
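The pattern boils down to something like this (the pool and the
size/min/max values are placeholders):

  #include <gst/gst.h>

  static void
  set_pool_on_query (GstQuery * query, GstBufferPool * pool, guint size)
  {
    if (gst_query_get_n_allocation_pools (query) > 0) {
      /* the query already carries a pool entry: update slot 0 */
      gst_query_set_nth_allocation_pool (query, 0, pool, size, 2, 0);
    } else {
      /* empty query, e.g. as received from upstream: append an entry */
      gst_query_add_allocation_pool (query, pool, size, 2, 0);
    }
  }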
Instead, remember we need a keyframe, and we will force the encoder
to emit one next time we submit a new frame.
Since libtheora does not have an API to request a keyframe, we reset
the max keyframe interval to 1 temporarily.
This has the advantage that the rate control keeps its history,
and that the encoder won't choose different quant tables or
the like, which would require new streamheaders (although this is
probably only a theoretical possibility). Should also be a
bit faster than resetting the encoder.
https://bugzilla.gnome.org/show_bug.cgi?id=663350
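A sketch of the libtheora side of this; the interval value to restore
would come from the element's configured keyframe setting:

  #include <theora/theoraenc.h>

  /* Temporarily drop the forced keyframe interval to 1 so the next
   * submitted frame becomes a keyframe, then restore the configured
   * value. The caller supplies the encoder context and the interval. */
  static void
  force_next_keyframe (th_enc_ctx * enc, ogg_uint32_t keyframe_interval)
  {
    ogg_uint32_t one = 1;

    th_encode_ctl (enc, TH_ENCCTL_SET_KEYFRAME_FREQUENCY_FORCE,
        &one, sizeof (one));
    /* ... submit the next frame with th_encode_ycbcr_in() ... */
    th_encode_ctl (enc, TH_ENCCTL_SET_KEYFRAME_FREQUENCY_FORCE,
        &keyframe_interval, sizeof (keyframe_interval));
  }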
The current code is memsetting the GstVideoFrame.data address to 0s (which
causes a segfault). This member is actually an array of data buffers (one for
each plane). This fix iterates over each data plane to clear them all.
https://bugzilla.gnome.org/show_bug.cgi?id=695655
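Roughly the iteration meant here; approximating each plane's size as
stride times component height is a simplification that holds for the
planar formats involved:

  #include <string.h>
  #include <gst/video/video.h>

  /* Clear every mapped plane instead of memset()ing the
   * GstVideoFrame.data pointer array itself. */
  static void
  clear_frame (GstVideoFrame * frame)
  {
    guint i, n = GST_VIDEO_FRAME_N_PLANES (frame);

    for (i = 0; i < n; i++) {
      guint8 *data = GST_VIDEO_FRAME_PLANE_DATA (frame, i);
      gint stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, i);
      gint height = GST_VIDEO_FRAME_COMP_HEIGHT (frame, i);

      memset (data, 0, stride * height);
    }
  }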
No need to copy buffers we put into the streamheader any more
now that we don't put caps on buffers any more, so there's no
danger of a refcount cycle.
Really really fix attribute list handling by taking a
copy of the original attributes that pango_attr_list_filter
can mutate, but keeping the original around intact to restore
later.
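In code terms, the idea is along these lines (the filter callback is
whatever the element already uses):

  #include <pango/pango.h>

  /* Filter a *copy* so that pango_attr_list_filter() can consume
   * attributes from it, while the original list stays intact and can
   * be restored later. */
  static PangoAttrList *
  filter_attr_copy (PangoAttrList * original, PangoAttrFilterFunc func,
      gpointer data)
  {
    PangoAttrList *copy = pango_attr_list_copy (original);
    PangoAttrList *removed;

    /* removes matching attributes from 'copy' and returns them */
    removed = pango_attr_list_filter (copy, func, data);
    if (removed)
      pango_attr_list_unref (removed);

    return copy;   /* use this for rendering; 'original' is untouched */
  }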
The root cause is that alsa-lib is not thread-safe for the same handle:
two threads in GStreamer access alsa-lib without serialization.
The race condition happens when one thread holds the old app_ptr
position from the kernel while another thread advances the app_ptr.
When the former thread is scheduled to run again, it overwrites the
app_ptr with the old value copied from the kernel. Thus, the app_ptr
in the upper alsa-lib layer (pcm_rate) becomes one period size more
advanced than in the lower layer (pcm_hw and the kernel).
GStreamer uses non-blocking writes plus poll to communicate with
alsa-lib. With the app_ptr out of sync as described above, poll returns
immediately because the low-level alsa-lib concludes there is enough
space in the ring buffer, while the write function returns immediately
because the upper-level alsa-lib concludes there is not enough space.
The poll/write loop then spins until another period becomes available
in the ring buffer, which leads to 100% CPU usage.
delay_lock is used to avoid the race condition.
Fixes: https://bugzilla.gnome.org/show_bug.cgi?id=690937
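The shape of the fix, with delay_lock assumed to be a mutex guarding
every alsa-lib call made on the shared handle:

  #include <alsa/asoundlib.h>
  #include <glib.h>

  /* Minimal stand-in for the sink; the real element has more fields. */
  typedef struct {
    GMutex delay_lock;          /* serializes alsa-lib calls on 'handle' */
    snd_pcm_t *handle;
  } AlsaSinkState;

  /* Both the writing thread and the delay-querying thread take the same
   * lock before touching the pcm handle. */
  static guint
  alsa_sink_delay (AlsaSinkState * alsa)
  {
    snd_pcm_sframes_t delay = 0;

    g_mutex_lock (&alsa->delay_lock);
    if (snd_pcm_delay (alsa->handle, &delay) < 0 || delay < 0)
      delay = 0;
    g_mutex_unlock (&alsa->delay_lock);

    return (guint) delay;
  }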
Don't loop forever if a USB audio device gets disconnected
while in use. Post an error message instead. This is not
enough yet though, we still need to make the base class
and/or the ring buffer bail out.
https://bugzilla.gnome.org/show_bug.cgi?id=690197
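A hedged sketch of bailing out on disconnect instead of retrying; the
error category chosen here is illustrative:

  #include <errno.h>
  #include <alsa/asoundlib.h>
  #include <gst/gst.h>

  /* Write-loop excerpt: on -ENODEV stop retrying and post an error so
   * the application hears about the disconnected device. */
  static gint
  write_or_post_error (GstElement * element, snd_pcm_t * handle,
      const guint8 * data, snd_pcm_uframes_t frames)
  {
    snd_pcm_sframes_t written = snd_pcm_writei (handle, data, frames);

    if (written == -ENODEV) {
      GST_ELEMENT_ERROR (element, RESOURCE, WRITE,
          ("Audio device disconnected"), (NULL));
      return -1;                /* let the caller bail out */
    }
    return (gint) written;
  }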
The format probing code was assuming there would be one caps
structure for each separate width/depth combination, like
we had in 0.10 all over the place: for one, we would query
unsigned/signed formats together for the same width/depth,
and we would add the entire current structure to the probed
caps when we found that a format was supported. Now that we have
all raw formats in a single structure, this no longer works:
we would add the entire structure, with all possible formats,
to the probed caps even if only one format was supported.
Fix probing so that we only return the list of actually
supported raw audio formats (with native endianness) from
get_caps().
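Conceptually the probing then becomes something like the following,
building a list of only the formats the device accepts; the format
table is abbreviated and glosses over endianness handling:

  #include <alsa/asoundlib.h>
  #include <gst/gst.h>

  static void
  add_supported_formats (snd_pcm_t * pcm, snd_pcm_hw_params_t * params,
      GstStructure * s)
  {
    static const struct { snd_pcm_format_t alsa; const gchar *gst; }
        table[] = {
      { SND_PCM_FORMAT_S16, "S16LE" },    /* native endian in reality */
      { SND_PCM_FORMAT_U16, "U16LE" },
      { SND_PCM_FORMAT_S32, "S32LE" },
    };
    GValue list = G_VALUE_INIT, item = G_VALUE_INIT;
    guint i;

    g_value_init (&list, GST_TYPE_LIST);
    g_value_init (&item, G_TYPE_STRING);

    for (i = 0; i < G_N_ELEMENTS (table); i++) {
      if (snd_pcm_hw_params_test_format (pcm, params, table[i].alsa) == 0) {
        g_value_set_string (&item, table[i].gst);
        gst_value_list_append_value (&list, &item);
      }
    }

    gst_structure_set_value (s, "format", &list);
    g_value_unset (&list);
    g_value_unset (&item);
  }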
Make it possible for subclasses to provide the timestamp (as an absolute time
against the pipeline clock) of the last read data.
Fix up alsa to provide the timestamp received from alsa. Because the alsa
timestamps are in monotonic time, we can only do this when the monotonic clock
has been selected as the pipeline clock.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=635256
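One way to check for that, as a sketch (the pipeline clock is assumed
to have been obtained already):

  #include <gst/gst.h>

  /* Only use the alsa-provided (monotonic) timestamps when the pipeline
   * clock is itself the monotonic system clock. */
  static gboolean
  clock_is_monotonic (GstClock * clock)
  {
    GstClockType clock_type;

    if (!GST_IS_SYSTEM_CLOCK (clock))
      return FALSE;

    g_object_get (clock, "clock-type", &clock_type, NULL);
    return clock_type == GST_CLOCK_TYPE_MONOTONIC;
  }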
This reverts part of "visual: enable commented out code again."
(commit 8222ba16c8).
The shader code does indeed look broken (or rather,
it makes assumptions that are not necessarily true here,
namely that the pixel stride is 4, for example), which
makes totem very crashy and causes other weird behaviour.
Also see https://bugzilla.gnome.org/show_bug.cgi?id=683527
gst_element_class_get_pad_template() does not return a ref,
so we mustn't unref the template returned. Fixes crashes
when switching back and forth between different types of
subtitle streams.
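For reference, the correct usage pattern (the template name here is
only an example):

  #include <gst/gst.h>

  /* gst_element_class_get_pad_template() is transfer-none: the returned
   * template belongs to the class and must not be unreffed. */
  static GstPad *
  request_pad_from_template (GstElement * element, const gchar * pad_name)
  {
    GstElementClass *klass = GST_ELEMENT_GET_CLASS (element);
    GstPadTemplate *templ;

    templ = gst_element_class_get_pad_template (klass, "sink_%u");
    if (templ == NULL)
      return NULL;

    /* use it, but do NOT gst_object_unref (templ) afterwards */
    return gst_element_request_pad (element, templ, pad_name, NULL);
  }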
Add support for GstVideoMeta and GstVideoFrame.
Remove some redundant fields that are also in GstVideoInfo.
Disable the shader code, it looks broken.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=681719
We don't really care about what's inside those channels. This also makes the
caps valid because now it's no longer possible to have channels=1 and a mask
of 0x3.
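In caps terms, that is something along these lines:

  #include <gst/gst.h>

  /* A channel-mask of 0 means unpositioned channels, which stays valid
   * for any channel count (unlike e.g. channels=1 with mask 0x3). */
  static GstCaps *
  make_unpositioned_caps (gint channels)
  {
    return gst_caps_new_simple ("audio/x-raw",
        "channels", G_TYPE_INT, channels,
        "channel-mask", GST_TYPE_BITMASK, (guint64) 0, NULL);
  }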
Pick the delta pad earlier, during header parsing, and pick it based
on whether it's a video stream or not, rather than relying on some
rather byzantine signalling from theoraenc etc., which would set the
delta flag on header packets; oggmux would then pick that up and
determine that this is a "delta-able" stream.
Since the new video-encoder-based theoraenc didn't do that any more,
we would only see the first delta flag on the second video packet,
which is after we've already muxed a few audio packets flagged as
key units, which trips up the unit test.
Fixes pipelines/oggmux unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=679958
Add a copy of the base class until it is stable. Right now the extra
effects of the base class are not supported, as the subclass overwrites
the buffer instead of blending.
We now support pre-multiplied alpha in the overlay composition API,
and can avoid multiple conversions if the overlay also supports
pre-multiplied alpha. We should probably also have mapped the
buffer as READWRITE when unpremultiplying.
A crafted file with invalid pages will cause repeated searches from
earlier offsets in steps of 8500 bytes, each reading until the end of
the stream. Since we know the maximum size of an Ogg page, we can bound
the search for the next page to get linear behavior (though this is
still not good enough, as it will read the entire file backwards if
there is no valid page before then).
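A sketch of the bound, using the fixed Ogg maximum page size; the helper
shape is illustrative, not the actual oggdemux code:

  #include <glib.h>

  /* An Ogg page is at most 27 + 255 + 255 * 255 = 65307 bytes, so a page
   * starting inside the current search chunk ends within one maximal
   * page of it; there is no need to read on to the end of the stream. */
  #define MAX_OGG_PAGE_SIZE 65307

  static gint64
  forward_scan_limit (gint64 offset, gint64 chunk_size, gint64 length)
  {
    gint64 limit = offset + chunk_size + MAX_OGG_PAGE_SIZE;

    if (length >= 0 && limit > length)
      limit = length;
    return limit;
  }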